hacker_homie 2 days ago
run local models | ||||||||
mark_l_watson a day ago
I experiment a lot with local models: great results for engineering tasks, less so for coding agents. I have used the following on a 32GB Mac Mini to help write useful code:

    ollama launch claude --model qwen3.6:27b-coding-nvfp4

The problem is that running local models (except for engineering tasks like data munging) is slow. With the setup above I kick off a task (configured to require no user confirmation) and go for a walk while waiting for results that my Gemini Ultra plan would produce in ten seconds.
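If you want to script this kind of fire-and-forget task yourself, here is a minimal sketch that drives a locally served model through ollama's standard HTTP API (POST to /api/generate on the default port 11434). The model tag and the prompt are placeholders, not the setup from the comment above; substitute whatever `ollama list` shows on your machine.

    # Minimal sketch: send one coding task to a local model via ollama's
    # HTTP API and block until the full completion comes back.
    import json
    import time
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint
    MODEL = "qwen2.5-coder:32b"  # placeholder tag; use a model you have pulled

    payload = {
        "model": MODEL,
        "prompt": "Write a Python function that deduplicates a CSV by its first column.",
        "stream": False,  # wait for the whole answer instead of streaming tokens
    }

    start = time.time()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    print(f"elapsed: {time.time() - start:.1f}s")  # local runs can take minutes
    print(body["response"])

Setting "stream" to false makes the call block until the completion is ready, which suits a go-take-a-walk workflow, and timing the request makes the local-versus-hosted latency gap concrete.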
SchemaLoad 2 days ago
You need massively expensive hardware to run them, and they still aren't as good. It's pretty clear the true cost of AI tools is far higher than what we're being charged right now.