mft_ 3 hours ago
I’m frequently surprised how little I can find online about exactly this - different harnesses for local models and how to set them up. The documentation for opencode with local models is (IMO) pretty bad - even Claude Opus (!) struggled to get it running. And so far I’ve not found a decent alternative to Claude Desktop. (I’ve recently discovered that you can pipe local models into Claude Code and Claude Desktop, so this is on my list to try.)
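(For reference, my understanding of the trick, hedged: Claude Code reads the ANTHROPIC_BASE_URL environment variable, so anything that exposes an Anthropic-compatible Messages endpoint locally can stand in for the hosted API. Below is a minimal sketch of that mechanism from Python - the port, key, and model tag are all assumptions for illustration, not the thread's actual setup:

    # Hedged sketch of the "pipe a local model into Claude tooling" idea:
    # stand up a local server that speaks Anthropic's Messages API (e.g. a
    # translation proxy in front of a local runner) and point the client at it.
    # The URL, key, and model tag are illustrative assumptions only.
    import anthropic

    client = anthropic.Anthropic(
        base_url="http://localhost:4000",   # hypothetical local proxy
        api_key="unused-locally",           # most local servers ignore this
    )
    msg = client.messages.create(
        model="qwen3:8b",                   # illustrative local model name
        max_tokens=256,
        messages=[{"role": "user", "content": "Hello from a local model"}],
    )
    print(msg.content[0].text)

Claude Code itself would be pointed at the same proxy via the env var rather than SDK arguments.)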
2ndorderthought 3 hours ago (reply)
Qwen3.6 is brand new. But also, search engines are so plastered with AI slop written by tools and companies that have no interest in you running local models. Ollama makes it one command to run small local models, though with the newest ones there can be kinks to work out first. r/LocalLLaMA is okay for some information, but beyond that there is so much noise and very little signal. I think it's intentional.
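(The one command is `ollama run <model>`; once `ollama serve` is running you can also hit its local HTTP API. A minimal sketch, assuming Ollama is on its default port and the model tag below - which is illustrative, not prescriptive - has already been pulled:

    # Query a locally served model through Ollama's HTTP generate endpoint.
    # Assumes `ollama serve` is running on the default port 11434.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen3:8b",   # whatever tag you pulled with `ollama pull`
            "prompt": "Why is documentation for local models so scarce?",
            "stream": False,       # one JSON object instead of a token stream
        },
        timeout=300,
    )
    print(resp.json()["response"])

The kinks tend to show up around chat templates and context length for the newest releases, not this basic plumbing.)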
| |||||||||||||||||