blackmanta 11 hours ago
With an Nvidia Spark or a 128 GB+ memory machine, you can get a good speedup on the 31B model if you use the 26B MoE as a draft model. It uses more memory, but I've seen acceptance rates of around 70%+ using Q8 on both models.
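For reference, a minimal sketch of the greedy draft-and-verify loop speculative decoding relies on. The toy models and the 0.7 agreement knob below are illustrative stand-ins (not the actual 31B/26B Q8 pair), and real engines verify the whole draft in one batched forward pass rather than a loop:

    import random

    # Greedy speculative decoding: the small "draft" model proposes k
    # tokens, the big "target" model verifies them and keeps the prefix
    # it agrees with. Both models here are toy callables; real use would
    # wrap two actual LLMs.
    def speculative_step(target, draft, ctx, k=4):
        # 1) draft model proposes k tokens autoregressively (cheap)
        proposal, d_ctx = [], list(ctx)
        for _ in range(k):
            t = draft(d_ctx)
            proposal.append(t)
            d_ctx.append(t)

        # 2) target checks each proposed token (a single batched pass
        #    in a real engine; a plain loop here for clarity)
        accepted, v_ctx = [], list(ctx)
        for t in proposal:
            expected = target(v_ctx)       # target's own greedy choice
            if expected != t:              # mismatch: drop rest of draft,
                accepted.append(expected)  # keep target's correction
                break
            accepted.append(t)
            v_ctx.append(t)
        else:
            # all k accepted: target emits one bonus token for free
            accepted.append(target(v_ctx))
        return accepted

    # toy models: target "knows" a fixed string, draft agrees ~70% of the time
    TEXT = list("speculative decoding pays off when the draft agrees. ")
    target = lambda ctx: TEXT[len(ctx) % len(TEXT)]
    draft = lambda ctx: target(ctx) if random.random() < 0.7 else "?"

    ctx = []
    while len(ctx) < 108:
        ctx.extend(speculative_step(target, draft, ctx))
    print("".join(ctx))

With ~70% per-token agreement and k=4, each target pass lands roughly 2-3 tokens on average instead of 1, which is where the speedup comes from.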
foobar10000 10 hours ago
Drafting 1 token ahead or 2? It's interesting: IMO we'll soon have draft models specifically post-trained to match denser, more complicated target models. I wouldn't be surprised if diffusion models made a comeback for this; they can draft many tokens at once, and their learning curves seem to top out at a 90%+ match with autoregressive models, so quite interesting.