|
| ▲ | NortySpock 3 hours ago | parent | next [-] |
| I tried the Zed editor and it picked up Ollama with almost no fiddling, so I've been able to run Qwen3.5:9B just by tweaking the Ollama settings (which had a few dumb defaults, I thought, like assuming I wanted to run 3 LLMs in parallel, disabling Flash Attention out of the box, and setting a very short context window...). Having a second pair of "eyes" to read a log error and dig into the relevant code is super handy for getting ideas flowing. |
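| (For anyone chasing the same defaults: Ollama reads these as environment variables on the server process. This is a sketch assuming a recent Ollama release; the variable names come from Ollama's server configuration, but double-check them and the values against your version's docs.)

```shell
# Run one model at a time instead of several in parallel
export OLLAMA_NUM_PARALLEL=1

# Enable Flash Attention (disabled by default in some builds)
export OLLAMA_FLASH_ATTENTION=1

# Raise the default context window (in tokens; pick what fits your VRAM)
export OLLAMA_CONTEXT_LENGTH=32768

# Restart the server so the new settings take effect
ollama serve
```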
|
| ▲ | AstroBen 3 hours ago | parent | prev | next [-] |
| It looks like Copilot has direct support for Ollama if you're willing to set that up: https://docs.ollama.com/integrations/vscode For LM Studio, under server settings you can start a local server that exposes an OpenAI-compatible API; you'd need to point Copilot at that. I don't use Copilot, so I'm not sure of the exact steps there. |
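| (A quick way to sanity-check that local server before wiring up any editor: hit its OpenAI-style endpoints with curl. The port 1234 below is LM Studio's usual default but may differ in your setup, and the model id is hypothetical; use whatever `/v1/models` actually reports.)

```shell
# List the models the local server has loaded
curl http://localhost:1234/v1/models

# Minimal OpenAI-style chat completion request; replace the model id
# with one returned by /v1/models (this one is just a placeholder)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2.5-coder-7b-instruct",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```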
|
| ▲ | brcmthrowaway 3 hours ago | parent | prev [-] |
| Basically LM Studio runs a server that serves models over HTTP on localhost. Configure/enable the server, then connect OpenCode to it. Try this article:
https://advanced-stack.com/fields-notes/qwen35-opencode-lm-s... I'm looking for an alternative to OpenCode, though; I can barely see the UI. |
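| (The wiring-up step above roughly amounts to declaring LM Studio as a custom OpenAI-compatible provider in OpenCode's config. A sketch, assuming OpenCode's custom-provider format and LM Studio's default port 1234; the model id is a placeholder, so verify the field names and id against the OpenCode docs and your loaded model.)

```shell
# Write an OpenCode config that points at LM Studio's local server.
# Provider/model names here are illustrative, not canonical.
mkdir -p ~/.config/opencode
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio",
      "options": { "baseURL": "http://localhost:1234/v1" },
      "models": {
        "qwen2.5-coder-7b-instruct": { "name": "Qwen 2.5 Coder (local)" }
      }
    }
  }
}
EOF
```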
|