acaloiar | 2 days ago
Tokens are an implementation detail that has no business being part of product pricing. It's deliberate obfuscation.

First, there's the simple math of converting tokens to dollars. That part is easy enough; people are familiar with "credits". Credits can be obfuscation too, but at least they're honest. The second and harder obfuscation to untangle is converting "tokens" to "value". When the value customers get from their tokens slips, they still pay the same price for the service, and generative AI companies are under no obligation to refund anything, because the customer paid for tokens and got tokens in return.

Customers have to trust that they're being given the highest-quality tokens the provider can generate. I don't have that trust. They also have to trust that generative AI companies aren't padding responses with superfluous tokens to hit revenue targets; we've all seen how much fluff is in default LLM responses. Pinky promises don't make for healthy business relationships.
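(For the "simple math" part, here's a rough sketch in Python. The per-million-token rates and token counts are invented for illustration, not taken from any provider's actual price list.)

```python
# Hypothetical per-million-token rates; real providers publish their own numbers.
INPUT_RATE_PER_M = 3.00    # dollars per 1M input tokens (assumed)
OUTPUT_RATE_PER_M = 15.00  # dollars per 1M output tokens (assumed)

def prompt_cost(input_tokens: int, output_tokens: int) -> float:
    """Convert one request's token counts into a dollar cost."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 2,000-token prompt that produces an 800-token reply.
print(f"${prompt_cost(2_000, 800):.4f}")  # -> $0.0180
```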
ibejoeb | 2 days ago
Tokens aren't much more opaque than GB-seconds of RAM for serverless functions, or similar billing units. You'd have to know the entire infra stack to really understand either. I don't really have a suggestion for a better unit for that kind of thing.
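(Same kind of conversion for GB-second billing, again with an invented rate rather than any platform's real one:)

```python
# Hypothetical price per GB-second; actual serverless platforms set their own.
RATE_PER_GB_SECOND = 0.0000167  # dollars (assumed)

def invocation_cost(memory_gb: float, duration_seconds: float) -> float:
    """Billable cost of one invocation at a given memory size and runtime."""
    return memory_gb * duration_seconds * RATE_PER_GB_SECOND

# Example: a 512 MB function running for 1.2 seconds.
print(f"${invocation_cost(0.5, 1.2):.7f}")  # -> $0.0000100
```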
mrcwinn | 2 days ago
Doesn’t prompt pricing obfuscate token costs by definition? I guess the alternative is everyone pays $500/mo. (And you’d still get more value than that.)