2ndorderthought 3 hours ago
Qwen3.6 is brand new. But also, search engines are plastered with AI slop written by tools and companies that have no interest in you using local models. Ollama makes running a small local model a one-command affair, but with the newest models there can be kinks to work out first. r/LocalLLaMA is okay for some information, but beyond that there is a lot of noise and very little signal. I think that's intentional.
mft_ 2 hours ago
Thanks. I’ve been experimenting with local models on and off for over a year now, so this isn’t limited to the latest Qwen. Anyway, I have no problem running them, but there’s a huge difference between running something via a chat interface and running it à la Claude Code, so that it can interact with the local environment and create/edit files. That’s the aspect I’ve found difficult.
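For what it’s worth, one low-friction route is to point an OpenAI-compatible client (or an agent that speaks that API) at Ollama’s local endpoint rather than wiring up tool use by hand. A minimal sketch, assuming Ollama is serving on its default port and that the model tag below is one you’ve actually pulled:

    from openai import OpenAI

    # Ollama exposes an OpenAI-compatible API at localhost:11434 by default.
    # The api_key is ignored by Ollama, but the client library requires one.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    # "qwen3" is a placeholder tag; substitute whichever model you have pulled.
    resp = client.chat.completions.create(
        model="qwen3",
        messages=[{"role": "user", "content": "Summarise this repo's layout"}],
    )
    print(resp.choices[0].message.content)

Agents that do create/edit files usually just take that base_url/model pair in their config; the model-specific kinks (context length, tool-call formatting) tend to surface there.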