linuxftw 3 days ago
That's true only in a vacuum. For example, should I run gpt-oss-20b unquantized or gpt-oss-120b quantized? Some model families have a 70B/30B spread, and that's just within a single base model; many different models at different quants could be compared for different tasks.
jkingsman 3 days ago | parent | next
Definitely. As a hobbyist, I have yet to put together a good heuristic for higher-quant/fewer-params vs. lower-quant/more-params. I've mentally been drawing the line at around Q4, but with IQ quants and other improvements in the space, I'm not so sure anymore.
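A rough starting point is comparing weight memory alone, since params times bits-per-weight dominates the footprint. A back-of-the-envelope Python sketch (numbers are nominal; it ignores KV cache, activations, and runtime overhead, and assumes MXFP4 costs ~4.25 bits/weight once block scales are counted):

    # Weight memory estimate: params (billions) * bits per weight / 8 = GB
    def weight_gb(params_b: float, bits_per_weight: float) -> float:
        return params_b * bits_per_weight / 8

    for name, params_b, bits in [
        ("gpt-oss-20b  @ 16-bit", 20, 16.0),
        ("gpt-oss-120b @ MXFP4 (~4.25 bits incl. scales)", 120, 4.25),
    ]:
        print(f"{name}: ~{weight_gb(params_b, bits):.0f} GB")
    # -> ~40 GB vs ~64 GB: the 120B at 4-bit still needs more memory,
    #    so the real question is whether the extra params beat the lost precision.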
danielhanchen 3 days ago | parent | prev
For GPT OSS in particular, OpenAI only released the MoE layers in MXFP4 (4-bit), so the "unquantized" version is 4-bit MoE + 16-bit attention. I uploaded "16bit" versions to https://huggingface.co/unsloth/gpt-oss-120b-GGUF, and they use 65.6GB whilst MXFP4 uses 63GB, so it's not that much of a difference; same with GPT OSS 20B. llama.cpp also unfortunately cannot quantize matrices whose dimensions are not a multiple of 256 (GPT OSS's are 2880).
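To make the 256-multiple point concrete: llama.cpp's k-quants pack weights into super-blocks of 256 (QK_K), and a quick check shows why a 2880-wide tensor doesn't fit cleanly (a sketch, assuming the 2880 dimension from above):

    # llama.cpp k-quants use 256-weight super-blocks (QK_K = 256);
    # a row of 2880 weights doesn't divide evenly into them.
    QK_K = 256
    dim = 2880
    print(dim / QK_K)   # 11.25 super-blocks
    print(dim % QK_K)   # 64 weights left over -> tensor can't be k-quantized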