| ▲ | cyanydeez 5 hours ago |
| Sounds like you're a candidate for a local model. It's kinda nice not caring what the token count means except as to compaction. |
|
| ▲ | brushfoot 5 hours ago | parent | next [-] |
| Not paying per token? Not sending my code to someone else's servers for inference? That's the stuff of sweet dreams for a stingy, paranoid solopreneur like me. If I could run a local model comparable to even Sonnet 4.6 without shelling out $50K in hardware, I'd do it in a heartbeat. But all I have is 32 GB of RAM and an old RTX 4080. Or am I not up to speed? Are there decent coding models that can run on dev laptops? Not that that's necessarily what you were suggesting by recommending a local model; just curious. |
| |
| ▲ | robertkarl 2 hours ago | parent [-] | | I'm trying to figure this out too... what I'm seeing is that local models like the Qwen 3.5 family that fit on hardware like yours handle ambiguity poorly, but are still capable of emitting complete apps. That, and they have tool-use issues.... https://www.reddit.com/r/LocalLLM/comments/1smzw6s/qwen35_a3... I would check out the model mentioned in that thread, the GGUF unsloth/qwen3.5-35b-a3b at Q4_K_M | | |
| ▲ | frabcus an hour ago | parent [-] | | Qwen 3.6 is out now and a touch better than 3.5. I'm finding Google's Gemma 4 even better though - seems to hold up the agentic loop better than Qwen. All will load into 20 GB of VRAM. None are amazing, but they do just about work. |
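[A rough way to sanity-check the VRAM claims above: a model's weight footprint is roughly parameter count times bits-per-weight of the quantization. A minimal sketch in Python, assuming Q4_K_M averages about 4.85 bits per weight (llama.cpp's reported figure) and a guessed 20% overhead for KV cache and activations; the overhead factor and the figures in the table are approximations, not measurements, and real fits can be tighter by offloading some layers to CPU.]

```python
# Back-of-envelope estimate of VRAM needed to load a quantized GGUF model.
# Bits-per-weight values are approximate averages for llama.cpp quant formats.
BITS_PER_WEIGHT = {
    "Q8_0": 8.5,
    "Q4_K_M": 4.85,  # the quant mentioned upthread
    "Q4_0": 4.55,
}

def est_vram_gb(params_billion: float, quant: str, overhead: float = 1.2) -> float:
    """Estimate VRAM (GB) to load the model: weights * rough overhead factor."""
    weight_bytes = params_billion * 1e9 * BITS_PER_WEIGHT[quant] / 8
    return weight_bytes * overhead / 1e9

# A 35B-parameter model at Q4_K_M lands in the low-to-mid 20s of GB by this
# estimate, so fitting it in ~20 GB means trimming context or offloading layers.
print(round(est_vram_gb(35, "Q4_K_M"), 1))
```

[This is a sketch, not a measurement: MoE models like an "a3b" variant still store all expert weights, so active-parameter count doesn't shrink the load footprint, only the compute.]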
|
| ▲ | kanemcgrath 5 hours ago | parent | prev [-] |
| I do love using local models when I can, but qwen-35B is the best model I can run, and while it's an insanely good local model, it does not compare to the big ones. |
| |
| ▲ | deaux 5 hours ago | parent [-] | | Have you tried the latest Gemma? You might prefer it to Qwen, depending on what you're doing. | | |
| ▲ | kanemcgrath 4 hours ago | parent [-] | | I did, but in almost everything I tried even qwen3.5 was better, and 3.6 was a huge step up. |
|
|