scosman · 3 days ago

What are folks' motivations for using local coding models? Is it privacy, and there's no cloud host you trust? I love local models for some use cases. However, for coding there is a big gap between the quality of models you can run at home and those you can't (at least on hardware I can afford), like GLM 4.6, Sonnet 4.5, Codex 5, or Qwen Coder 408. What makes local coding models compelling?
realityfactchex · 3 days ago

> compelling
> motivation

It's the only way to be sure your code is not being trained on.

Most people never come up with any truly novel ideas to code. That's fine; there's no point in those people not submitting their projects to LLM providers. This lack of creativity is so prevalent that many people believe it is not possible to come up with new ideas (variants: it's all been tried before; or: it would inevitably be tried by someone else anyway; or: people will copy anyway).

Some people do come up with new stuff, though. And (sometimes) they don't want to be trained on. That is the main edge, IMO, for running local models. In a word: competition.

Note, this is distinct from fearing copying by humans (or agents) with LLMs at their disposal. This is about not seeding patterns more directly into the code being trained on. Most people would say: forget that, just move fast and gain dominance. And they might not be wrong. Time may tell. But the reason can still stand as a compelling motivation, at least theoretically.

Tangential: IANAL, but I imagine there's some kind of parallel concept around code/concept "property ownership". If you literally send your code to a 3P LLM, I'm guessing they have rights to it, and some otherwise handwavy (quasi-important) IP ownership might become suspect. We are possibly in a post-IP world (for some decades now, depending on who's talking), but not everybody agrees on that currently, AFAICT.

jckahn · 3 days ago

I don't ever want to be dependent on a cloud service to be productive, and I don't want to have to pay money to experiment with code. Paying money for probabilistically generated tokens is effectively gambling. I don't like to gamble.

voakbasda · 3 days ago

Zero trust in remote systems run by others with unknowable or questionable motives.

zargon · 3 days ago

Another reason, along with the others mentioned, is that the output quality of the top commercial models varies wildly over time. They start strong and then deteriorate, because the providers keep changing the model and/or its configuration without changing the name. With a local open-weights model, you can learn each model's strengths, and it can't be taken away by an update.
brailsafe · 3 days ago

I don't run any locally, but when I was thinking about investing in a setup, it would just have been to have the tool offline. I haven't found the online subscription models useful often enough, beyond the occasional tedious implementation, to justify investing in either online or offline LLMs long-term, and I've mostly reverted to normal programming, since it keeps me more engaged.

garethsprice · 3 days ago

It's fun for me. This is a good enough reason to do anything. I learn a lot about how LLMs work and how to work with them. I can also ask my dumbest questions to a local model and get a response faster, without burning tokens that count towards usage limits on the hosted services I use for actual work.

It's definitely a hobby-category activity, though. Don't feel you're missing out on some big advantage (yet, anyway) unless you feel a great desire to set fire to thousands of dollars in exchange for spending your evenings untangling CUDA driver issues and wondering if that weird smell is your GPU melting. Some people are into that sort of thing, though.
johnisgood · 3 days ago

What setup would you (or other people) recommend for a local model, and which model, if I want something like Claude Sonnet 4.5 (or, actually, earlier versions, which seemed to be better)? Anyone can chime in! I just want to have a working local model that is at least as good as Sonnet 4.5, or 3.x.

nprateem · 3 days ago

Deep-seated paranoia, delusions of grandeur, bragging rights, etc, etc.