Surpassing vLLM with a Generated Inference Stack (infinity.inc)
55 points by lukebechtel a day ago | 16 comments
ntonozzi 21 hours ago | parent | next [-]

Why do they need to run benchmarks to confirm performance? Can't they run a set of example prompts and verify they get the exact same output token probabilities? The fact that they are not doing this makes me suspicious that they are in fact not doing the exact same thing as vLLM.

It is also a bit weird that they are not incorporating speculative decoding; that seems like a critical performance optimization, especially for decode-heavy workloads.
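
A minimal sketch of the equivalence check suggested above, assuming both engines expose an OpenAI-compatible /v1/completions endpoint with logprobs (vLLM's server does; the second URL and the model name are placeholders). Exact equality is probably too strict once FP8 kernels are involved, so a small tolerance in logprob space is a more realistic bar:

    # Sketch: compare per-token logprobs from two OpenAI-compatible servers
    # on the same prompt with greedy decoding. URLs and model name are placeholders.
    import requests

    PROMPT = "Explain KV caching in one paragraph."
    ENGINES = {
        "vllm": "http://localhost:8000/v1/completions",
        "generated_stack": "http://localhost:8001/v1/completions",
    }

    def sample(url):
        resp = requests.post(url, json={
            "model": "qwen-8b",      # placeholder
            "prompt": PROMPT,
            "max_tokens": 64,
            "temperature": 0.0,      # greedy, so token choices should match
            "logprobs": 1,           # return the logprob of each sampled token
        })
        resp.raise_for_status()
        lp = resp.json()["choices"][0]["logprobs"]
        return lp["tokens"], lp["token_logprobs"]

    (tok_a, lp_a), (tok_b, lp_b) = (sample(u) for u in ENGINES.values())
    assert tok_a == tok_b, "engines diverged on token choice"
    print("max per-token logprob diff:", max(abs(a - b) for a, b in zip(lp_a, lp_b)))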

lukebechtel 21 hours ago | parent | next [-]

Yes, speculative decoding will make both us and vLLM faster, but we believe it would be a relatively even bump on both sides, so we didn't include it in this comparison. Worth another test!

jeeeb 7 hours ago | parent | prev [-]

> It is also a bit weird that they are not incorporating speculative decoding

Wouldn’t speculative decoding decrease overall throughput, but optimise (perceived) responsiveness?

YetAnotherNick 6 hours ago | parent [-]

In the compute-bound regime (high batch size), yes, but at low batch sizes it could improve throughput too.
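
For a rough sense of the low-batch case, the standard expected-acceptance formula from the speculative decoding literature (Leviathan et al., 2023) gives the tokens produced per target-model pass; the k and acceptance-rate values below are illustrative, not from the post:

    # Expected tokens produced per target-model verification pass when a
    # draft model proposes k tokens with per-token acceptance rate a:
    # closed form of sum_{i=0..k} a**i.
    def expected_tokens_per_pass(k, a):
        return (1 - a ** (k + 1)) / (1 - a)

    for k in (2, 4, 8):
        for a in (0.6, 0.8):
            print(f"k={k}, acceptance={a}: {expected_tokens_per_pass(k, a):.2f}")

    # At low batch size decode is memory-bandwidth bound, so a verify pass
    # costs about the same as a single-token step and this ratio roughly
    # tracks the throughput gain; at high batch size the extra compute of
    # verifying k tokens eats the benefit.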

2001zhaozhao 9 hours ago | parent | prev | next [-]

Every example like this makes it obvious that you can now use ML-like optimization approaches on well-specified, very-well-tested software problems with a clear optimization goal. Keep if it improves the objective while maintaining correctness, discard if it doesn't. AI-descent strikes again.

Maybe I should learn more about ML to have a better instinct on optimization methods in general, so I can actually build AI optimizers like these.

storus 18 hours ago | parent | prev | next [-]

Does it support paged attention like vLLM though? Without that they will run into memory fragmentation quickly.

lukebechtel 18 hours ago | parent [-]

Yes, great question!

The system started without paged attention and automatically recreated its own paged-attention implementation once it identified memory fragmentation as a bottleneck.

Pretty cool!
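
For anyone unfamiliar with the term: paged attention keeps the KV cache in fixed-size blocks addressed through a per-sequence block table, so variable-length sequences reuse freed blocks instead of fragmenting one large contiguous buffer. A toy sketch of the bookkeeping (an illustration of the general idea, not the implementation from the post):

    # Toy block allocator in the spirit of paged attention. Each sequence
    # holds a block table mapping logical positions to physical KV blocks;
    # freed blocks go back to the pool for reuse.
    BLOCK_SIZE = 16  # tokens per KV block

    class KVBlockAllocator:
        def __init__(self, num_blocks):
            self.free = list(range(num_blocks))
            self.block_tables = {}  # seq_id -> list of physical block ids

        def append_token(self, seq_id, pos):
            """Return (physical_block, offset) where this token's KV is stored."""
            table = self.block_tables.setdefault(seq_id, [])
            if pos % BLOCK_SIZE == 0:  # crossed a block boundary, grab a new block
                if not self.free:
                    raise MemoryError("KV cache exhausted; preempt or evict a sequence")
                table.append(self.free.pop())
            return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

        def free_sequence(self, seq_id):
            self.free.extend(self.block_tables.pop(seq_id, []))

The attention kernel then gathers keys and values through the block table instead of assuming contiguity, which is where most of the real implementation work sits.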

hoerzu 5 hours ago | parent | prev | next [-]

What's the jitter? What's the std? What about 1:1 output equality?

What's the POST-request latency of this part? What's the TTFT?

rfw300 21 hours ago | parent | prev | next [-]

OK... we need way more information than this to validate this claim! I can run Qwen-8B at 1 billion tokens per second if you don't check the model's output quality. No information is given about the source code, correctness, batching, benchmark results, quantization, etc. etc. etc.

lukebechtel 21 hours ago | parent [-]

We currently validate with MMLU and HellaSwag, and are getting this independently verified by a third party.

We have considered open-sourcing some of our optimized inference libraries in the future, but have not yet come to a decision on this.

Also, if you need a rough intuition as to why this is possible: this entire inference stack was built for exactly one model, so we can tune the whole framework accordingly.
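
To make that concrete: a general-purpose engine dispatches on head counts, hidden sizes, GQA ratios, and so on at runtime, while a single-model stack can bake them in as compile-time constants and pre-specialize its kernels. A hedged illustration with placeholder numbers (not the post's actual model config):

    # Illustration only: shapes hardcoded for one hypothetical model, so tile
    # sizes, fusion choices, and memory layout can be fixed ahead of time
    # instead of chosen dynamically per request.
    HIDDEN = 4096        # placeholder dimensions
    NUM_Q_HEADS = 32
    NUM_KV_HEADS = 8     # fixed GQA ratio the kernel can unroll over
    HEAD_DIM = HIDDEN // NUM_Q_HEADS

    # A general engine reads these from a config and picks kernels at runtime;
    # a single-model stack can pre-build one fused attention variant per
    # (batch bucket, sequence-length bucket) pair:
    SPECIALIZED_KERNELS = [
        {"q_heads": NUM_Q_HEADS, "kv_heads": NUM_KV_HEADS, "head_dim": HEAD_DIM,
         "max_batch": b, "max_seq": s, "kv_block": 16}
        for b in (1, 8, 32, 128)
        for s in (1024, 8192)
    ]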

rfw300 15 hours ago | parent [-]

I've no problem with the intuition. But I would hope for a lot more focus in the marketing materials on proving the (statistical) correctness of the implementation. A 15% inference speedup isn't worth switching to a completely unknown inference engine that hasn't been tested across a wide range of generation scenarios.

LuxBennu 2 hours ago | parent | next [-]

rfw300 nails it. In production LLM serving, correctness at the tail matters more than median throughput. MMLU and Hellaswag validate general capability, but they don't catch subtle issues like KV cache corruption under high concurrency or numerical drift accumulating over long generations. The single-model specialization approach is smart in theory — the question is whether the generated stack handles all the edge cases that vLLM has fixed over thousands of issues and PRs. 15% throughput gain means nothing if you get one garbled response per thousand requests.
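
One concrete shape such a check could take, again assuming an OpenAI-compatible endpoint with placeholder URL and model name: fire many concurrent greedy requests for the same prompt and compare every response against a single-request reference. (Batch-dependent floating-point reduction order can cause benign divergence even in a correct engine, so treat it as a smoke test rather than a hard pass/fail.)

    # Concurrency smoke test: under greedy decoding the same prompt should
    # produce the same completion regardless of what else shares the batch.
    # Frequent mismatches point at KV-cache or batching bugs.
    import concurrent.futures
    import requests

    URL = "http://localhost:8001/v1/completions"   # placeholder endpoint
    BODY = {"model": "qwen-8b", "prompt": "List the first five primes.",
            "max_tokens": 32, "temperature": 0.0}

    def completion():
        r = requests.post(URL, json=BODY)
        r.raise_for_status()
        return r.json()["choices"][0]["text"]

    reference = completion()  # taken while the server is otherwise idle

    with concurrent.futures.ThreadPoolExecutor(max_workers=64) as pool:
        results = list(pool.map(lambda _: completion(), range(512)))

    mismatches = sum(r != reference for r in results)
    print(f"{mismatches}/512 responses diverged from the single-request reference")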

lukebechtel 13 hours ago | parent | prev [-]

This is a fair critique! We plan to use our system to generate many more inference libraries of this nature, and I'll make it a point to release better, broader correctness measures when we do so.

ismailmaj 5 hours ago | parent | prev | next [-]

Any place we can find the code?

acuozzo 19 hours ago | parent | prev [-]

Luke: Do you have benchmarks for BF16?

lukebechtel 19 hours ago | parent [-]

Unfortunately, not at present; we went with FP8 because we believed it was generally the best tradeoff of quality and speed, and it allowed faster iteration as well.

We believe our improvements would hold on BF16, but let me check.