brushfoot 5 hours ago

Not paying per token? Not sending my code to someone else's servers for inference? That's the stuff of sweet dreams for a stingy, paranoid solopreneur like me.

If I could run a local model comparable to even Sonnet 4.6 without shelling out $50K in hardware, I'd do it in a heartbeat. But all I have is 32 GB of RAM and an old RTX 4080.

Or am I not up to speed? Are there decent coding models that can run on dev laptops? Not that that's what you were suggesting by recommending a local model, necessarily; just curious.

robertkarl 2 hours ago | parent [-]

I am trying to figure this out too... what I am seeing is that local models like the Qwen 3.5 family that fit on hardware like yours handle ambiguity poorly, but they are capable of emitting complete apps.

That, and they have tool-use issues: https://www.reddit.com/r/LocalLLM/comments/1smzw6s/qwen35_a3...

I would check out the model mentioned in that thread: unsloth/qwen3.5-35b-a3b in GGUF format at Q4_K_M quantization.
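As a rough sanity check on whether a quant like that fits in VRAM: GGUF file size is approximately parameter count times bits-per-weight divided by 8, plus some overhead for KV cache and context. A minimal sketch, assuming Q4_K_M averages about 4.8 bits/weight (an approximation, not an exact figure for any specific model):

```python
def gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough GGUF weight size in decimal GB: params * bits / 8."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 35B-parameter model at ~4.8 bits/weight:
print(round(gguf_size_gb(35, 4.8), 1))  # ~21.0 GB of weights
```

Note that for a MoE model like an a3b variant, the full weight set still has to be resident even though only a few billion parameters are active per token, so it's the total parameter count that matters for memory.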

frabcus an hour ago | parent [-]

Qwen 3.6 is out now and a touch better than 3.5.

I'm finding Google's Gemma 4 even better, though - it seems to hold up the agentic loop better than Qwen.

All will load into 20 GB of VRAM. None are amazing, but they do just about work.