mark_l_watson a day ago
I experiment a lot with local models: great results for engineering tasks, less so for coding agents. I have used the following on a 32G Mac Mini to help write useful code:

    ollama launch claude --model qwen3.6:27b-coding-nvfp4

The problem is that running local models (except for engineering tasks like data munging) is slow. With the above setup I kick off a task (with no user verification required) and go for a walk while I wait for results that my Gemini Ultra plan would produce in 10 seconds.