sorenjan (3 months ago):
You can use Ollama to serve a model locally, and the Continue extension to use it inside VSCode.
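Not from the thread, but a minimal sketch of what that local serving looks like from a client's point of view, assuming "ollama serve" is running on its default port 11434 and the model named below (a placeholder, pulled beforehand with "ollama pull") is available:

    # Sketch: query a locally served Ollama model over its default HTTP API.
    # Assumes `ollama serve` is listening on 127.0.0.1:11434 and that the
    # model named below has already been pulled (the choice is an assumption).
    import json
    import urllib.request

    payload = {
        "model": "qwen2.5-coder",  # placeholder; use whichever model you pulled
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,           # return one JSON object instead of a stream
    }
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Continue then talks to that same local endpoint from inside VSCode (roughly, by pointing a model entry in its config at the Ollama provider), so nothing model-specific has to live in the editor itself.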
syntaxing (3 months ago):
Relevant telemetry information: I didn’t like how Continue went from opt-in to opt-out telemetry earlier this year.
freehorse (3 months ago):
Is autocomplete working well with this setup?
homarp (3 months ago):
You can do that with llama-server (llama.cpp's built-in server) too.
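For comparison, a rough sketch of the llama-server equivalent, assuming it was started with something like "llama-server -m ./model.gguf --port 8080" (the model path is a placeholder) and is exposing its OpenAI-compatible chat endpoint on the default port:

    # Sketch: the same local-serving idea with llama.cpp's llama-server,
    # using its OpenAI-compatible /v1/chat/completions endpoint.
    # Assumes the server is already running on 127.0.0.1:8080.
    import json
    import urllib.request

    payload = {
        "model": "local",  # llama-server serves whatever model it was launched with
        "messages": [{"role": "user", "content": "Explain what a mutex is."}],
    }
    req = urllib.request.Request(
        "http://127.0.0.1:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])

Because the endpoint speaks the OpenAI wire format, editor tooling that can target a custom OpenAI-compatible base URL can generally be pointed at it as well.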