CuriouslyC · 3 hours ago
The only AI use case that really cares about latency is interactive voice agents, where you ideally want a <200ms response time, and 100ms of network latency kills that. For coding and batch-job agents, anything under 1s isn't going to matter to the user.
coredog64 · 25 minutes ago
A customer service chatbot can require more than one LLM call per response, to the point that latency anywhere in the system starts to show up as a degraded end-user experience.
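The compounding effect is just arithmetic: with serial calls, per-call network overhead multiplies. A minimal sketch (the call counts and millisecond figures are illustrative assumptions, not measurements):

```python
# Hypothetical latency-budget sketch: per-call network overhead compounds
# when a single chatbot response is built from several serial LLM calls.

def total_latency_ms(n_calls: int, network_ms: float, inference_ms: float) -> float:
    """Wall-clock time for n serial LLM calls, each paying network + inference cost."""
    return n_calls * (network_ms + inference_ms)

# One call with 100 ms network overhead can squeak under a 200 ms budget
# only if inference is fast; three serial calls blow well past it.
print(total_latency_ms(1, 100, 80))  # 180.0
print(total_latency_ms(3, 100, 80))  # 540.0
```

Pipelines that fan out calls in parallel pay the network cost roughly once instead of n times, which is one reason agent frameworks try to batch or parallelize tool calls.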
electroly · 3 hours ago
tbh, that's a good point about voice agents that I hadn't considered. I guess there are some latency-sensitive inference workloads. Thanks for pointing that out.
| ||||||||