▲ | 0cf8612b2e1e 7 days ago
During times of high utilization, how do they handle more requests than they have hardware? Is the software granular enough that they can round robin the hardware per token generated? UserA token, then UserB, then UserC, back to UserA? Or is it more likely that everyone goes into a big FIFO processing the entire request before switching to the next user? I assume the former has massive overhead, but maybe it is worthwhile to keep responsiveness up for everyone.
▲ | cornholio 7 days ago
Inference is essentially a very complex matrix algorithm run repeatedly on its own output: at each step the input (the context window) is shifted and the newly generated tokens are appended to the end. That makes it easy to multiplex all active sessions over limited hardware: a typical server can hold hundreds of thousands of active contexts in main system RAM, each under 500 KB, and ferry them to the GPU nearly instantaneously as required.
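
To make the token-level round robin from the question concrete, here is a toy sketch of interleaving sessions one decode step at a time. The Session class, the generate_next_token stand-in, and the 5-token cap are all invented for illustration; a real server batches many contexts into a single GPU forward pass rather than looping in Python.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Session:
        user: str
        prompt: list[str]
        generated: list[str] = field(default_factory=list)
        max_tokens: int = 5

    def generate_next_token(context: list[str]) -> str:
        # Stand-in for a real GPU forward pass; just returns a placeholder token.
        return f"tok{len(context)}"

    def round_robin_decode(sessions: list[Session]) -> None:
        # Interleave sessions one token at a time: UserA, UserB, UserC, back to UserA.
        queue = deque(sessions)
        while queue:
            s = queue.popleft()
            # "Ferry" this session's context to the accelerator, decode one token, append it.
            s.generated.append(generate_next_token(s.prompt + s.generated))
            if len(s.generated) < s.max_tokens:
                queue.append(s)  # Not finished yet: back of the line.

    sessions = [Session("UserA", ["hello"]), Session("UserB", ["hi"]), Session("UserC", ["hey"])]
    round_robin_decode(sessions)
    for s in sessions:
        print(s.user, s.generated)

Each session gets one token and then goes to the back of the queue, which is roughly the UserA, UserB, UserC, back-to-UserA pattern the question describes; the per-step overhead is moving each session's context on and off the accelerator.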
▲ | computomatic 7 days ago
This is product design at its finest. First of all, they never “handle more requests than they have hardware.” That’s impossible (at least as I’m reading it).

The vast majority of usage is via their web app (and free accounts, at that). The web app defaults to “auto” selecting a model, and the algorithm for that selection is hidden information. As load peaks, they can divert requests to different tiers of hardware and less resource-hungry models. Only a very small minority of requests actually specify the model to use.

There are a hundred similar product design hacks they can use to mitigate load, but this seems like the easiest one to implement.
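
As a purely illustrative sketch of what load-based “auto” selection could look like (the model names, utilization thresholds, and choose_model function are all invented here; the real routing algorithm is, as noted above, hidden information):

    # Hypothetical load-based "auto" model routing; names and thresholds are made up.
    def choose_model(requested: str | None, gpu_utilization: float) -> str:
        if requested is not None:
            return requested           # Requests that pin a model keep it.
        if gpu_utilization > 0.9:
            return "small-fast-model"  # Peak load: divert to the cheapest model.
        if gpu_utilization > 0.7:
            return "mid-tier-model"
        return "flagship-model"        # Spare capacity: serve the best model.

    print(choose_model(None, 0.95))              # -> small-fast-model
    print(choose_model("flagship-model", 0.95))  # pinned model is honored

Only the “auto” majority gets rerouted under load, which is exactly why the small minority that pins a model doesn't constrain them much.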
▲ | the8472 7 days ago
During peaks they can kick out background work like model training, or batch jobs from API users.
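
A toy illustration of that kind of preemption (the queue shape, job names, and priority values are all made up): interactive traffic and batch/background work share one priority queue, and batch work only drains when nothing interactive is waiting.

    import heapq

    # Interactive requests (priority 0) always drain before batch/background jobs
    # (priority 1), so during peaks the batch work simply waits.
    INTERACTIVE, BATCH = 0, 1

    queue: list[tuple[int, int, str]] = []
    counter = 0

    def submit(priority: int, job: str) -> None:
        global counter
        heapq.heappush(queue, (priority, counter, job))  # counter keeps FIFO order within a priority
        counter += 1

    submit(BATCH, "nightly fine-tune shard")
    submit(INTERACTIVE, "chat request from UserA")
    submit(BATCH, "API bulk embedding job")
    submit(INTERACTIVE, "chat request from UserB")

    while queue:
        _, _, job = heapq.heappop(queue)
        print("running:", job)  # both chat requests come out before any batch job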
▲ | vikramkr 6 days ago
In addition to approaches like that, they also handle it with rate limits; with that message Claude would throw almost all the time, "demand is high so you have automatically been switched to concise mode"; and by making batch inference cheaper for API customers to convince them to use that instead of real-time replies. The site simply erroring out during a period of high demand also works, as does prioritizing business customers during a rollout or letting the service degrade. It's not like any provider has a track record of effortlessly keeping responsiveness high; usually it's the opposite.
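
As a toy sketch of that rate-limit-and-degrade pattern (the window size, threshold, and admit function are invented for illustration): over the threshold, requests aren't rejected outright but get a cheaper, degraded mode.

    import time

    WINDOW_SECONDS = 60
    MAX_REQUESTS_PER_WINDOW = 10

    request_log: dict[str, list[float]] = {}

    def admit(user: str, now: float | None = None) -> str:
        # Sliding-window count of this user's recent requests; past the cap,
        # serve a degraded ("concise") response instead of erroring out.
        now = time.time() if now is None else now
        recent = [t for t in request_log.get(user, []) if now - t < WINDOW_SECONDS]
        recent.append(now)
        request_log[user] = recent
        return "full" if len(recent) <= MAX_REQUESTS_PER_WINDOW else "concise"

    for i in range(12):
        print(i, admit("UserA", now=1000.0 + i))  # the last two requests get "concise"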