sorenjan 9 hours ago

You can use Ollama to serve a model locally, and the Continue extension to use it from VSCode.

https://ollama.com/blog/continue-code-assistant
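For reference, a minimal sketch of what that setup can look like, assuming Ollama is on its default port and Continue reads ~/.continue/config.json; the model name is just an example:

    # pull a model; the Ollama server usually runs as a background service
    ollama pull llama3

    # ~/.continue/config.json -- point Continue at the local model
    {
      "models": [
        { "title": "Llama 3 (local)", "provider": "ollama", "model": "llama3" }
      ]
    }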

syntaxing 9 hours ago

Relevant telemetry information. I didn’t like how they went from opt-in to opt-out earlier this year.

https://docs.continue.dev/telemetry
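If the linked page still matches what I remember, opting out is a single key in ~/.continue/config.json:

    {
      "allowAnonymousTelemetry": false
    }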

homarp 4 hours ago

You can do that with llama-server (llama.cpp's built-in server) too.
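Roughly like this (the model path is a placeholder; Continue then talks to it as a generic OpenAI-compatible endpoint):

    # serve a local GGUF model over an OpenAI-compatible HTTP API
    llama-server -m ./models/your-model.Q4_K_M.gguf --port 8080

    # Continue model entry pointing at it (apiBase is the key bit):
    { "provider": "openai", "model": "local", "apiBase": "http://localhost:8080/v1" }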