zokier 5 hours ago
For someone who is very out of the loop with these AI models, can someone explain what I can actually run on my 3080ti (12G)? Is this model something I could run, or is it still too big? Is there anything remotely useful runnable on my GPU? I have 64G of RAM, if that helps.
AlbinoDrought 4 hours ago | parent
This model does not fit in 12G of VRAM; even the smallest quant is unlikely to fit. However, portions can be offloaded to regular RAM / CPU at a performance cost. I would recommend trying llama.cpp's llama-server with models of increasing size until you find the best quality/speed tradeoff you're willing to accept on your hardware. The Unsloth guides are a great place to start: https://unsloth.ai/docs/models/qwen3-coder-next#llama.cpp-tu...
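If it helps to see the offload knob concretely, here is a minimal sketch using the llama-cpp-python bindings (not llama-server itself, just the same llama.cpp machinery); the model filename, layer count, and context size are placeholders you would tune for a 12G card:

    # pip install llama-cpp-python (built with CUDA for GPU offload)
    from llama_cpp import Llama

    # n_gpu_layers controls how many transformer layers sit in VRAM;
    # the remaining layers stay in system RAM and run on the CPU.
    llm = Llama(
        model_path="model-Q4_K_M.gguf",  # placeholder filename
        n_gpu_layers=20,   # raise until you run out of the 12G of VRAM
        n_ctx=8192,        # context length also consumes VRAM/RAM
        n_threads=8,       # CPU threads for the offloaded layers
    )

    out = llm("Write a function that reverses a linked list.", max_tokens=256)
    print(out["choices"][0]["text"])

With llama-server the same idea is exposed as command-line flags; the point is just that the GPU-layer count is the main lever when the whole model doesn't fit in VRAM.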
cirrusfan 4 hours ago | parent
This model is exactly what you’d want for your resources: GPU for prompt processing, RAM for the model weights and context, and being MoE it’s fairly zippy. Q4 is decent; Q5-Q6 is even better, assuming you can spare the resources. Going past Q6 gets into heavily diminishing returns.
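For a rough sanity check on whether a given quant fits, GGUF size is roughly parameters × bits-per-weight / 8. The bits-per-weight figures below are ballpark numbers for the common K-quants, and the parameter count is a placeholder; plug in the real count for whatever model you're eyeing:

    # Back-of-the-envelope GGUF size estimate: params * bits_per_weight / 8.
    # Bits-per-weight values are approximate effective rates, not exact.
    QUANT_BPW = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5}

    def est_size_gib(n_params: float, quant: str) -> float:
        return n_params * QUANT_BPW[quant] / 8 / 2**30

    n_params = 80e9  # placeholder; substitute the actual parameter count
    for q in QUANT_BPW:
        print(f"{q}: ~{est_size_gib(n_params, q):.0f} GiB of weights")

Whatever that number comes out to, plus a few GiB for context, is what has to fit across your 12G of VRAM and 64G of RAM.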