sorenjan | a year ago
You can use Ollama to serve a model locally, and Continue to use it in VSCode.
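For anyone who hasn't set this up: once `ollama serve` is running, the model is exposed over a local HTTP API (default port 11434), and that local endpoint is what Continue talks to. A minimal sketch of calling it directly, assuming a model has already been pulled (the `codellama` name here is just an example):

```typescript
// Minimal sketch: call a locally served Ollama model over its HTTP API.
// Assumes `ollama serve` is running on the default port 11434 and that a
// model has already been pulled, e.g. `ollama pull codellama`.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama", // example model name; use whatever you pulled
      prompt,
      stream: false,      // return a single JSON object instead of a stream
    }),
  });
  const data = await res.json();
  return data.response;   // Ollama returns the generated text in `response`
}

complete("// a function that reverses a string\n").then(console.log);
```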
syntaxing | a year ago
Relevant telemetry information. I didn’t like how they went from opt-in to opt-out earlier this year.
freehorse | a year ago
Is autocomplete working well?
homarp | a year ago
You can do that with llama-server too.
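Right, llama.cpp's `llama-server` exposes an OpenAI-compatible endpoint, so the same kind of local setup works; you just point the client (or, presumably, an OpenAI-compatible provider in Continue) at it instead. A rough sketch, assuming something like `llama-server -m model.gguf --port 8080` is already running (the model path is a placeholder):

```typescript
// Minimal sketch: call a local llama-server instance through its
// OpenAI-compatible chat endpoint. Assumes llama-server is listening
// on port 8080 with a model already loaded.
async function chat(content: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // standard OpenAI-style response shape
}

chat("Explain tail call optimization in one sentence.").then(console.log);
```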