xnx 5 hours ago:
Gemini 2.5 Flash-Lite does 400 tokens/sec. Is there a benefit to going faster than a person can read?
atls 3 hours ago:
There is also the use case of delegating tasks programmatically to an LLM, for example, transforming unstructured data into structured data. This task often can't be done reliably without either 1. lots of manual work, or 2. intelligence, especially when the structure of the individual data pieces is unknown. Problems like these can be solved much more efficiently by LLMs, and if you imagine these programs processing very large datasets, then sub-millisecond inference is crucial.
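A minimal sketch of that pattern in Python, assuming the OpenAI SDK; the model name and the output schema are placeholders, not anything from the thread:

    # Extract structured fields from a freeform record via an LLM.
    import json
    from openai import OpenAI

    client = OpenAI()

    def extract(record: str) -> dict:
        resp = client.chat.completions.create(
            model="my-fast-model",  # placeholder model name
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Return JSON with keys: name, date, amount."},
                {"role": "user", "content": record},
            ],
        )
        return json.loads(resp.choices[0].message.content)

Over millions of records, per-call latency dominates total runtime, which is where very fast inference pays off.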
xi_studio 2 hours ago:
Agents already bypass human reading speed: if the model can turn input into output instantly, an agent can loop on it too, running long chains of generation tasks near-instantly.
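Rough arithmetic on why per-step speed compounds in a sequential loop; the step count and token figures below are illustrative assumptions, not benchmarks:

    # Each step waits on the previous one, so total generation time
    # is steps * tokens_per_step / tokens_per_second.
    steps, tokens_per_step = 50, 500
    for tok_per_sec in (400, 40_000):
        total = steps * tokens_per_step / tok_per_sec
        print(f"{tok_per_sec:>6} tok/s -> {total:.1f} s of pure generation")
    # 400 tok/s -> 62.5 s; 40_000 tok/s -> 0.6 s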
booli 4 hours ago:
Agents also "read", so yes there is. Think about spinning up 10, 20, or 100 sub-agents for a small task and having them all return near-instantly. That's the use case, not the chatbot.
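A sketch of that fan-out with asyncio; run_subagent is a hypothetical coroutine standing in for whatever inference call you actually make:

    # Fan out N sub-agents concurrently; with near-instant inference,
    # wall-clock time is roughly one call, not N sequential calls.
    import asyncio

    async def run_subagent(task: str) -> str:
        await asyncio.sleep(0)  # stand-in for the real model call
        return f"result for {task!r}"

    async def main() -> list[str]:
        tasks = [f"subtask-{i}" for i in range(100)]
        return await asyncio.gather(*(run_subagent(t) for t in tasks))

    results = asyncio.run(main())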
cheema33 4 hours ago:
Yes. You can allow multiple people to use a single chip. A slower solution will be able to service far fewer users.
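Back-of-the-envelope version, assuming the chip's aggregate throughput is shared across users and that roughly 10 tok/s per user matches reading speed (both figures are assumptions):

    # Concurrent users a chip can serve at human reading speed:
    aggregate_tok_per_sec = 40_000  # assumed chip-level throughput
    per_user_tok_per_sec = 10       # assumed "reading speed" budget
    print(aggregate_tok_per_sec // per_user_tok_per_sec)  # -> 4000 users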