thecolorblue 6 days ago

I just ran a test giving the same prompt to Claude, Gemini, Grok, and Qwen3 Coder running locally. Qwen did great by last year's standards and was very useful for building out boilerplate code. That said, if you put the code side by side with the cloud-hosted models' output, I don't think anyone would pick Qwen.

If you have 32 GB of memory you are not using, it is worth running for small tasks. Otherwise, I would stick with a cloud-hosted model.

blackoil 5 days ago | parent | next [-]

That should remain true for the foreseeable future. A 30B model can't beat a 300B one, and running a 300B model locally is prohibitively expensive. By the time it becomes feasible, the cloud will have moved on to models 10x larger.

apitman 3 days ago | parent | prev | next [-]

At 4-bit quantization the weights take only about half a byte per parameter (roughly 15 GB for a 30B model). You need a good chunk for context as well, but in my limited testing Qwen3-30B ran well on a single RTX 3090 (24 GB VRAM).
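The memory math above is easy to sanity-check with a back-of-envelope sketch. This is an illustrative estimate of weight memory only (KV cache and activation overhead are extra and vary with context length); the numbers are assumptions, not measured figures.

```python
# Rough estimate of VRAM needed just for a model's weights at a
# given quantization level. Illustrative only; real loaders add
# overhead for the KV cache, activations, and framework buffers.

def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (decimal) for a model."""
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# A 30B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(30, bits):.0f} GB for weights")
```

At 4 bits the weights come to ~15 GB, which leaves some headroom for context on a 24 GB RTX 3090; at fp16 (~60 GB) the same model cannot fit.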

dcreater 5 days ago | parent | prev [-]

Please share the results