unethical_ban 4 days ago

I still can't figure out how to set up a completely free, completely private/no-accounts method of connecting an IDE to LM Studio. I thought it would be the "Continue" extension for VS Code, but even for local LM integration it insists I sign in to their service before continuing.

mikestaas 4 days ago | parent | next [-]

Roo Code in VS Code, and Qwen Coder in LM Studio, is a decent local-only combo.
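
For anyone wiring this up: Roo Code just needs LM Studio's OpenAI-compatible server running. A quick sanity check, assuming LM Studio's default port of 1234 (the `lms` CLI ships with recent LM Studio versions; you can also toggle the server on in the app's Developer tab):

    # start LM Studio's local server
    lms server start

    # sanity-check the OpenAI-compatible endpoint Roo Code will talk to
    curl http://localhost:1234/v1/models

Then in Roo Code's settings, pick LM Studio as the API provider and it should pick up whatever model you have loaded at that same endpoint.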

omneity 4 days ago | parent [-]

Strongly seconding Roo Code. I am using it in VSCodium and it's the perfect partner for a fully local coding workflow (nearly 100% open-source too, so no vendor is going to pry it from my hands, "ever").

Qwen Coder 30B is my main driver in this configuration and in my experience is quite capable. It runs at 80 tok/s on my M3 Max and I'm able to use it for about 30-50% of my coding tasks, the most menial ones. I am exploring ways to RL its approach to coding so it fits my style a bit more; it's a very exciting prospect if I manage to figure it out.

The missing link is autocomplete, since Roo only solves the agent part. Continue.dev does a decent job at that, but you really want to pair it with a high-performance, large-context model (so it fits multiple code sections + your recent changes + context about the repo, and gives fast suggestions), and that doesn't seem feasible or enjoyable yet in a fully local setup.
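
For reference, Continue configures autocomplete separately from chat via "tabAutocompleteModel" in its config.json. A minimal sketch, assuming LM Studio on its default port; the model name here is just a placeholder for whatever small completion model you have loaded (small 1-3B models are what keep suggestion latency tolerable):

    {
      "tabAutocompleteModel": {
        "title": "Local autocomplete",
        "provider": "lmstudio",
        "model": "qwen2.5-coder-1.5b",
        "apiBase": "http://localhost:1234/v1"
      }
    }

The context-window constraint described above doesn't go away with this; it just gets you wired up to try it.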

unethical_ban 4 days ago | parent [-]

Thanks to both for recommending Roo; it's the closest I've gotten. I still can't get it to work the way I expect.

When I use Qwen Coder 30B directly to create a small demo web page, it gives me all the files and filenames. When I do the same thing in Roo chat (set to Code mode), it runs around in circles, doesn't build multiple files, and eventually crashes out.

maxsilver 4 days ago | parent | prev | next [-]

Both Roo and Continue support local models (via LM Studio). For Continue, you add a fake account (type in literally anything) and then click 'edit' -- it takes you to the settings JSON, where you can set LM Studio as your source.
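
For anyone hunting for it, the relevant bit of that settings JSON looks roughly like this -- a sketch, not gospel; "lmstudio" is a supported provider in Continue, the model string is whatever LM Studio reports for your loaded model, and apiBase is LM Studio's default address (omit it if yours matches):

    {
      "models": [
        {
          "title": "Qwen Coder (LM Studio)",
          "provider": "lmstudio",
          "model": "qwen2.5-coder-30b",
          "apiBase": "http://localhost:1234/v1"
        }
      ]
    }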

The main problem I'm seeing is that a lot of the tooling doesn't work as well "agentically" with local models. (Most of these tools say something like 'works best with Claude, tested with Claude, good luck with any local models'.) Local models via LM Studio already work really well for pure chat, but trip up semi-regularly on basic agentic things, like writing files or running commands -- stuff that, say, GitHub Copilot has mostly already polished.

But those are basically just bugs in the tooling that will likely get fixed. The local-only setup is behind the current commercial market -- but not far behind.

I strongly agree with the commenter above: if the commercial models and tooling slow down at any point, the free/open models and tooling will absolutely catch up -- I'd guess within 9 months or so.

taneq 4 days ago | parent | prev [-]

Huh? I have Continue on Codium talking to Ollama, all local, and I never signed up to nuffin’