canpan 8 hours ago

Recent models (Qwen 3.6 and Gemma) can really do coding locally. Feels like SOTA from maybe a year ago? But you'd want about 32-40GB of total memory, and 24GB is just a bit short of that. A gaming PC with a 16GB graphics card and 32GB of RAM brings you very close to a usable coding system.
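The 32-40GB figure can be sanity-checked with back-of-the-envelope math: weight size is roughly parameter count times bits per weight, plus overhead for the KV cache and runtime buffers. A minimal sketch (the 30% overhead factor is an assumption, not a measured value; real usage depends on context length and runtime):

```python
# Rough memory estimate for a quantized local model (sketch only).
# Actual usage varies with context length, KV-cache settings, and runtime.

def model_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB for `params_b` billion parameters."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

# A 35B model at ~5 bits per weight (Q5-style quantization):
weights = model_size_gb(35, 5.0)   # weights alone
total = weights * 1.3              # assumed +30% for KV cache and buffers
print(f"weights ~= {weights:.1f} GB, working set ~= {total:.1f} GB")
```

That lands around 22GB of weights and a working set near 28GB, which is why 24GB total is tight but 16GB VRAM + 32GB RAM works.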

wktmeow 6 hours ago | parent | next [-]

That’s the exact ram/vram combo of my desktop - what model would you suggest for that gaming pc setup?

canpan 4 hours ago | parent [-]

I would recommend starting with Qwen 3.6 35B at maybe Q5; it should be fast on that setup. For intelligence, Qwen 3.7 27B is smarter but will run much slower. Others also mention Gemma 4, which might be worth a try.

solenoid0937 6 hours ago | parent | prev | next [-]

> Feels like SOTA from maybe a year ago?

Agree, but only for small projects. SOTA from a year ago still wins on larger projects.

DrBenCarson 7 hours ago | parent | prev | next [-]

How are you using that RAM with the GPU?

canpan 7 hours ago | parent [-]

llama.cpp with automatic offload to main memory. You can also use Ollama, which is easier but slower.
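With llama.cpp the split between VRAM and system RAM is controlled by how many layers you offload to the GPU. A minimal sketch (the model path and layer count are illustrative; tune `--n-gpu-layers` upward until VRAM is nearly full):

```shell
# --n-gpu-layers: how many transformer layers to keep in VRAM;
# the remaining layers run on the CPU from system RAM.
# -c sets the context window; bigger contexts grow the KV cache.
./llama-cli -m ./models/model.gguf \
    --n-gpu-layers 30 \
    -c 8192 \
    -p "Write a binary search in Python."
```

Recent llama.cpp builds can also pick the layer split automatically, but setting it by hand makes the VRAM/RAM tradeoff explicit.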

reverius42 2 hours ago | parent [-]

For those who want a GUI, LM Studio does this too (with llama.cpp as the backend I think). I'm getting great (albeit slow) results with Qwen3.6-35B MoE on 8GB GPU RAM, 40GB system RAM.

ai_fry_ur_brain 7 hours ago | parent | prev [-]

"Coding system" "can really do coding locally"

Vibe coders out here thinking all software development is solved because they made an (ugly and unoriginal) dashboard for their SaaS clone and a single-column landing page with 3x3 feature cards that's identical to every other vibe coder's "startup"
