fendy3002 | 19 hours ago
Well, I presented the statement wrongly. What I mean is that the use cases for LLMs are trivial things, so they shouldn't be expensive to operate, and the $1 cost in your case is heavily subsidized. That price won't hold up for long, assuming the computing power stays the same.
killerstorm | 13 hours ago
Cheaper models might be around $0.01 per request, and that's not subsidized: many different providers serve open-source models at quality similar to the proprietary ones. On-device generation is also an option now. The $1 figure I mentioned is for Claude Opus 4. I doubt it's subsidized either - it's already much more expensive than the open models.
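
For what it's worth, here's a rough back-of-the-envelope sketch (Python) of where those per-request figures come from, given per-million-token pricing. The token counts and prices below are illustrative assumptions, not quotes from any particular provider:

    # Back-of-the-envelope cost per request from per-million-token prices.
    # All numbers are illustrative assumptions, not actual provider quotes.

    def cost_per_request(tokens_in: int, tokens_out: int,
                         price_in_per_m: float, price_out_per_m: float) -> float:
        """Dollar cost of one request, given input/output token counts
        and prices in dollars per million tokens."""
        return (tokens_in / 1e6) * price_in_per_m + (tokens_out / 1e6) * price_out_per_m

    # Hypothetical cheap open-weight model: ~$0.50 / $1.50 per million tokens.
    cheap = cost_per_request(10_000, 5_000, 0.50, 1.50)

    # Hypothetical frontier model priced around $15 / $75 per million tokens.
    frontier = cost_per_request(5_000, 10_000, 15.0, 75.0)

    print(f"cheap model:    ~${cheap:.4f} per request")   # ~$0.0125, on the order of a cent
    print(f"frontier model: ~${frontier:.2f} per request")  # ~$0.83, on the order of a dollar

Under those assumptions the gap between "about a cent" and "about a dollar" per request is just a consequence of the per-token price difference, not of one tier being subsidized and the other not.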