| ▲ | kpw94 4 hours ago | |
On my 32GB Ryzen desktop (recently upgraded from 16GB, just before RAM prices went up another ~40%), I did the same llama.cpp setup (with the extra Vulkan steps) and also converged on Qwen3-Coder-30B-A3B-Instruct (likewise at Q4_K_M quantization).

On the model choice: I've tried the latest Gemma, Ministral, and a bunch of others, but Qwen was definitely the most impressive (and much faster at inference thanks to its MoE architecture), so I can't wait to try Qwen3.5-35B-A3B if it fits.

I have no clue which quantization to pick, though. I chose Q4_K_M at random; was your choice of quantization more educated?
| ▲ | zargon an hour ago | parent | next [-] | |
Quant choice depends on your VRAM, use case, need for speed, etc. For coding I wouldn't go below Q4_K_M (though at Q4, unsloth's XL quants or ik_llama's IQ quants are usually better at the same size). Preferably Q5 or even Q6.
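A rough way to sanity-check which quant fits your RAM is to multiply the parameter count by the quant's approximate bits per weight. A minimal sketch (the bits-per-weight figures are approximate averages for llama.cpp K-quants, which mix precisions per tensor, and ~30.5B is the total parameter count of Qwen3-Coder-30B-A3B):

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# BPW values are approximate averages for llama.cpp K-quants
# (mixed-precision, so not exactly 4/5/6 bits per weight).
BPW = {
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.69,
    "Q6_K": 6.56,
    "Q8_0": 8.50,
}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimated model file size in GB (1 GB = 1e9 bytes)."""
    return n_params * BPW[quant] / 8 / 1e9

# Qwen3-Coder-30B-A3B: ~30.5B total parameters (~3.3B active).
for q in BPW:
    print(f"{q}: ~{est_size_gb(30.5e9, q):.1f} GB")
```

By this estimate Q4_K_M lands around 18.5 GB and Q5_K_M around 21.7 GB; on a 32 GB box you also need room for the KV cache and the OS, so Q6_K (~25 GB) starts to get tight.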