▲ SlinkyOnStairs 3 hours ago
A fundamental architectural problem is that they genuinely do not know what a query will cost ahead of time. That's true even for a single standalone LLM, and the 'agentic' layers thrown on top make the problem exponentially worse. Fixing it would require moving away from LLMs entirely.
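To make the point concrete, here is a minimal sketch of why per-query cost is unknowable up front under typical per-token pricing. The prices and numbers are made up for illustration; the only thing knowable before the call is a bound, since output length is decided by the model at generation time.

```python
# Hypothetical per-token prices; real pricing varies by provider and model.
PRICE_IN = 3.00 / 1_000_000    # $ per input token (assumed)
PRICE_OUT = 15.00 / 1_000_000  # $ per output token (assumed)

def cost_bounds(input_tokens: int, max_tokens: int) -> tuple[float, float]:
    """Best- and worst-case cost of one completion, computable before the call.

    The input side is fixed, but the number of output tokens is unknown
    until the model stops, so the best we can do up front is bound it
    by the max_tokens cap.
    """
    fixed = input_tokens * PRICE_IN
    return (fixed, fixed + max_tokens * PRICE_OUT)

lo, hi = cost_bounds(input_tokens=2_000, max_tokens=4_096)
```

With these assumed prices, the pre-call estimate spans more than a 10x range, which is exactly the billing problem: you can cap the cost, but not predict it.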
▲ babyshake 3 hours ago | parent | next
Isn't this an orthogonal issue that doesn't affect whether billing is done with credits or money?
▲ zozbot234 an hour ago | parent | prev | next
If the expensive parts of the query happen to run iteratively (especially if agentic), you can hook into those loops to bound the cost. Even if it's pure forward generation, you could pause an expensive inference and continue it seamlessly with a cheaper model, at little added cost.