dec0dedab0de, 3 hours ago:
Most of the LLM tooling can handle different models, and Ollama makes it easy to install and run models locally. So you can configure aider, VS Code, or whatever you're using to connect to ChatGPT to point at your local models instead. None of them are as good as the big hosted models, but you might be surprised at how capable they are. I like running things locally when I can, and I also like not worrying about accidentally burning through tokens.

I think the future is multiple locally run models that call out to hosted models when necessary. I can imagine every device shipping with a base model and using LoRAs to learn the user's needs, with companies and maybe even households having their own shared models that do the heavier lifting, while companies like OpenAI and Anthropic continue to host the most powerful and expensive options.
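For the pointing-at-local-models part: Ollama exposes an OpenAI-compatible API on localhost, so anything built on the OpenAI client can usually be redirected by changing the base URL. A minimal Python sketch, assuming the default Ollama port; the model name qwen2.5-coder:14b is just an example, substitute whatever you've pulled:

    # Point the standard OpenAI client at Ollama's local
    # OpenAI-compatible endpoint (default port 11434).
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama",  # Ollama ignores the key, but the client requires one
    )

    resp = client.chat.completions.create(
        model="qwen2.5-coder:14b",  # example model; use anything you've pulled
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
    )
    print(resp.choices[0].message.content)

Most editor plugins and CLI tools that speak the OpenAI API have an equivalent base-URL or API-base setting, so the same redirect works without any code.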
roboror, 3 minutes ago (reply):
What models have you found capable? I was recently recommended Qwen3 Coder Next and didn't have much success with it. I have a good amount of VRAM/RAM, so I'd love to run something locally.