littlestymaar 3 hours ago

How do you think that works?!

With the exception of diffusion language models, which don't work this way but are very niche, language models are autoregressive, which means you do indeed need to process tokens in order.

And that's why model speed is such a big deal: you can't just throw more hardware at the problem, because the problem is latency, not compute.
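To illustrate the sequential dependency, here's a toy sketch (the `next_token` function is a made-up stand-in, not a real model) showing why the decoding loop itself can't be parallelized: each step's input includes the previous step's output.

```python
# Hypothetical toy model: the point is the loop structure, not the math.

def next_token(context):
    # Stand-in for a full forward pass; in a real LLM this is the part
    # that extra hardware can speed up *within* a single step.
    return sum(context) % 50257  # pretend vocabulary of 50257 tokens

def generate(prompt_tokens, n_new):
    tokens = list(prompt_tokens)
    for _ in range(n_new):        # this loop is inherently serial:
        tok = next_token(tokens)  # step i needs the output of step i-1
        tokens.append(tok)
    return tokens

out = generate([1, 2, 3], 5)
print(len(out))  # 8 tokens: 3 from the prompt + 5 generated, one per step
```

No matter how many GPUs you add, the five generated tokens still take five sequential forward passes, so per-step latency sets a hard floor on generation speed.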