NitpickLawyer 10 hours ago

API prices are most likely not subsidised. A brief look at openrouter can tell you that. There are plenty of providers that have 0 reason to subsidise that sell models at roughly the same average price. So the model works for them (or they wouldn't do it otherwise).

ai_fry_ur_brain 9 hours ago | parent [-]

They are subsidized, heavily. This is simple math, and there are lots of reasons to subsidize. Please go look up the hardware requirements to run your favorite model at a given tok/s, then multiply that by 86,400 (seconds in a day), divide by 1 million, and multiply by the $ per million tokens. Then ask yourself if there's any possibility they could be profitable or even close to break-even.
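The recipe in the comment above can be sketched directly; a minimal Python version, assuming the 150 tok/s and $15/million figures used elsewhere in this thread are representative (they are illustrative assumptions, not measured numbers):

```python
# Napkin math from the comment above: the revenue ceiling for one machine
# streaming output tokens continuously. All inputs are assumptions.

def daily_revenue_usd(tokens_per_sec: float, price_per_million_usd: float) -> float:
    """Max revenue per day if the machine emits tokens 24/7."""
    tokens_per_day = tokens_per_sec * 86_400          # seconds in a day
    return tokens_per_day / 1_000_000 * price_per_million_usd

# Example: 150 tok/s at $15 per million output tokens
print(round(daily_revenue_usd(150, 15.0), 2))  # 194.4
```

Whether that ceiling covers the hardware depends entirely on the cost side, which is the disputed part of this thread.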

You are going off vibes alone, this is easily verified, please go verify.

What makes you think they have zero reason to subsidize? Because the providers aren't household names, you assume they wouldn't operate at a loss? What's your logic here? You make no sense.

hibikir 5 hours ago | parent | next [-]

The volume of API tokens many large companies are using through, say, AWS Bedrock is quite high. We've seen leaks of the bills for real-world use cases. It's not unreasonable to see normal individual subscriptions as possibly subsidized... but do we think someone like Anthropic is going to be subsidizing 7-, 8-, or even 9-figure monthly bills from megacorps? Because said megacorps will swap to a competitor immediately, so your subsidy is unlikely to lead to loyalty or anything.

If Anthropic and OpenAI are subsidizing the metered API usage, their model is going to end up just as successful as MoviePass. They are burning enough money on the training costs already.

dakolli 5 hours ago | parent [-]

Large companies are paying an arm and a leg, but I'm still certain that even at $15.00 per million tokens they are not profitable.

If you have a machine running at 150 tok/s, you can only make about $5,832 a month at $15 per 1M tokens running 24/7. It costs a hell of a lot more than 6k a month to run Claude 4.7 @ 150 tok/s on that machine 24/7.

This math is a bit off because you have input tokens too, but regardless, it's still not profitable, especially given how long it takes to turn around a request, and the caching is probably not all that profitable either.
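The monthly figure above can be checked in a few lines; a sketch assuming a 30-day month, with the "~6k" hardware cost treated as a placeholder assumption (real serving costs are not public):

```python
# Sanity check on the figures in the comment above.
tok_per_sec = 150
price_per_million = 15.0
monthly_tokens = tok_per_sec * 86_400 * 30            # 388,800,000 tokens
monthly_revenue = monthly_tokens / 1_000_000 * price_per_million
assumed_monthly_cost = 6_000.0                        # the "~6k" claim, an assumption

print(round(monthly_revenue, 2))                # 5832.0
print(monthly_revenue >= assumed_monthly_cost)  # False
```

So under these single-stream assumptions the machine falls short of the assumed cost, which is the commenter's point; the replies below dispute the single-stream assumption itself.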

NitpickLawyer 3 hours ago | parent [-]

You are all over this thread, but you have no idea how inference works, and it's obvious. Your napkin math is off because you don't know what to add up, you lack the necessary background. And yet you persist and reply all over this thread. I don't get it.

Serving models on dedicated hardware is not the same as your at-home 150 t/s setup. Inference is measured in thousands of tokens per second in aggregate (i.e., across all the sessions running in parallel). That's how they make money.

CuriouslyC 5 hours ago | parent | prev [-]

Anthropic and OpenAI make money on API calls, margins have been reported in public filings. Subs are subsidized.

dakolli 5 hours ago | parent [-]

That's not possible, read my comment above. These are private companies, there are no public filings regarding their profitability in any sense. You're just making things up.

If you have a machine running at 150 tok/s, you can only make about $5,832 a month at $15 per 1M tokens running 24/7. It costs a hell of a lot more than 6k a month to run Claude 4.7 @ 150 tok/s on that machine 24/7.

This math is a bit off because you have input tokens too, but regardless, it's still not profitable, especially given how long it takes to turn around a request, and the caching is probably not all that profitable either.

mtone 4 hours ago | parent [-]

You're forgetting a critical factor: concurrency. If a given piece of hardware serves a single request at 150 tokens/s, it can also serve 20-30 requests at 100 tokens/s each. Suddenly your ~$5K becomes ~$100K/month, enough to recoup the cost of the hardware in a year or so.

The reason it works: each time you stream the model weights from memory (memory-bound) to compute the next token, you can apply them to many in-flight requests at once (compute-bound) while you're at it. It's also much more energy-efficient per token.

[1] https://aimultiple.com/gpu-benchmark
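The batching argument above can be put in numbers; a sketch where the 25-way concurrency and 100 tok/s per-stream figures are the commenter's illustrative assumptions, not benchmarks:

```python
# Aggregate throughput, not single-stream speed, sets the revenue ceiling.
# All figures are illustrative assumptions from the comments above.

def monthly_revenue(streams: int, tok_per_sec_per_stream: float,
                    price_per_million: float, days: int = 30) -> float:
    """Revenue if `streams` requests run in parallel 24/7 for `days` days."""
    aggregate_tps = streams * tok_per_sec_per_stream
    tokens = aggregate_tps * 86_400 * days
    return tokens / 1_000_000 * price_per_million

single = monthly_revenue(1, 150, 15.0)    # the single-stream case above
batched = monthly_revenue(25, 100, 15.0)  # 25 concurrent streams at 100 tok/s
print(round(single), round(batched))      # 5832 97200
```

A ~17x revenue jump from batching is why per-stream napkin math understates what dedicated serving hardware can earn, if the assumed concurrency is achievable.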

dakolli 3 hours ago | parent [-]

Interesting, I didn't know about this, but it makes sense after reading the article. They are benchmarking a single GPU on a 20B-param model. Does it scale across 60 H100s over NVLink/NVSwitch? I would be interested to see those benchmarks.

The idea that everyone is spinning up $2 million worth of GPUs to scan their email inbox, search the web, or avoid learning something is still ridiculous to me regardless.