sorenjan 9 hours ago:
You can use Ollama to serve a model locally, and the Continue extension to use it in VS Code.
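(For anyone curious what "serving locally" means in practice: Ollama runs an HTTP server on your machine, and Continue's Ollama provider talks to that local endpoint. A minimal sketch, assuming Ollama is running on its default port 11434 and you've pulled a model — "llama3" here is just an example name:

    import json
    import urllib.request

    # Ask the locally running Ollama server (default port 11434) for a completion.
    # "llama3" is an example; substitute whatever model you've pulled.
    payload = {"model": "llama3", "prompt": "Write a Python hello world.", "stream": False}
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Everything stays on localhost; no code or prompts leave your machine unless the editor extension itself phones home.)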
syntaxing 9 hours ago:
Relevant telemetry information: I didn't like how they went from opt-in to opt-out earlier this year.
homarp 4 hours ago:
You can do that with llama-server (llama.cpp's built-in HTTP server) too.
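(llama-server exposes an OpenAI-compatible chat endpoint. A rough sketch, assuming you started it with a GGUF model on the default port, e.g. `llama-server -m some-model.gguf --port 8080`:

    import json
    import urllib.request

    # llama-server serves an OpenAI-compatible API on localhost:8080 by default.
    payload = {
        "messages": [{"role": "user", "content": "Write a Python hello world."}],
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])

Editor plugins that let you set a custom OpenAI-compatible base URL, Continue included, can be pointed at it in much the same way.)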