Aurornis · 7 hours ago
Reading the details, he is using 2-bit quantization and reduced the number of experts per token from 10 down to 4 to get 5 tokens/sec. Cool proof of concept, but it's far from the quality and performance of the 397B model as normally used. Dropping the number of experts is particularly misleading.

This is some interesting work, but applying such extreme measures to LLMs to get them to run severely degrades quality. I know he claims negligible quality loss, but in my experience 2-bit quantizations are completely useless for real work. You can get them to respond to prompts, but they lose their intelligence and will go around in circles.

He also shows 5-6 tokens per second. Again, that's impressive for a large model on limited hardware, but it's very slow. Between the severely degraded model abilities and the extremely slow output, the 397B result should be considered an attempt at proving something can technically run, not evidence that it can run well and produce the output you'd expect from a 397B model.

He even mentions the obvious problems with his changes:

> 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.

So right out of the gate this isn't useful if you want to do anything with it. He could have tried smaller models or lighter quantization to get actually useful output from the model, but it wouldn't look as impressive.

It's honestly getting kind of exhausting to read all of these AI-coded (admitted in the link) and AI-written papers made more for resume building. It would have been more interesting to see this work applied to running a useful model that hadn't been lobotomized, instead of applying tricks to get an impressive headline but useless output.
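For anyone unfamiliar with the two tricks being criticized, here's roughly what they amount to. This is my own toy sketch in plain Python with made-up numbers, not the author's code:

```python
def quantize_2bit(weights):
    """Symmetric 2-bit quantization: every weight is snapped to one of
    only 4 representable levels (+/-0.5 and +/-1.5 times a scale).
    The returned list is the lossy, dequantized approximation."""
    scale = max(abs(w) for w in weights) / 1.5
    levels = (-1.5, -0.5, 0.5, 1.5)
    return [min(levels, key=lambda l: abs(w / scale - l)) * scale
            for w in weights]

def top_k_experts(router_logits, k):
    """MoE routing: a token is processed by only the k highest-scoring
    experts. Dropping k from 10 to 4 skips 60% of the experts the
    router would normally have activated for that token."""
    ranked = sorted(range(len(router_logits)), key=lambda i: -router_logits[i])
    return sorted(ranked[:k])

weights = [0.92, -0.41, 0.07, -0.88, 0.33]
print(quantize_2bit(weights))  # only a few distinct magnitudes survive

logits = [2.1, 0.3, 1.7, -0.5, 0.9, 1.2, 2.4, 0.1, 1.9, 0.6]
print(top_k_experts(logits, 10))  # all 10 experts, as the model was trained
print(top_k_experts(logits, 4))   # → [0, 2, 6, 8]
```

Both moves save memory and compute, and both throw away information the model was trained to use, which is exactly why the output degrades.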
190n · 2 hours ago
> 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.

I was wondering about that statement. Shouldn't the runtime restrict sampling to only tokens that produce valid JSON matching the schema during a tool call? On the other hand, I have heard a lot about how even production LLM providers don't always call tools accurately, so I suppose either what I described is hard to implement, or there's something I haven't thought of that makes it impossible.
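What I'm describing is usually called constrained (grammar- or schema-guided) decoding: before each sampling step, mask out every vocabulary token that can't keep the output on a path to valid JSON. A crude, hypothetical toy sketch of the masking step — real implementations compile the schema into a state machine rather than trying a handful of closers like this:

```python
import json

# Toy vocabulary of multi-character "tokens", like a real tokenizer's.
# '\\name\\' is the literal \name\ garbage from the comment above.
VOCAB = ['{"', 'name', '\\name\\', '":"', 'Ada', '"}', '{', '}']

def could_extend_to_json(prefix: str) -> bool:
    """Crude prefix-validity test: treat a string as a plausible JSON
    prefix if appending one of a few closers makes it parse. A real
    constrained decoder tracks grammar state exactly instead."""
    for closer in ('', '"', '"}', '": null}', '}'):
        try:
            json.loads(prefix + closer)
            return True
        except json.JSONDecodeError:
            continue
    return False

def allowed_tokens(generated: str) -> list:
    """The constrained-decoding step: drop (mask to -inf, in logit
    terms) every token that cannot extend `generated` toward valid
    JSON; the model then samples only from what remains."""
    return [t for t in VOCAB if could_extend_to_json(generated + t)]

print(allowed_tokens(''))            # only object openers survive
print(allowed_tokens('{"name":"'))   # string content / close-quote tokens
print(allowed_tokens('{"'))          # note: the \name\ token is masked out
```

With masking like this the \name\ output simply can't be sampled, so my guess is the hard part is elsewhere: efficiently tracking grammar state across a tokenizer whose tokens straddle JSON syntax boundaries, and the fact that a degraded model can still emit schema-valid but semantically wrong calls.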
kageroumado · 6 hours ago
[dead]