simonw 4 hours ago

If that was genuinely happening here - Anthropic were selling inference for less than the power and data center costs needed to serve those tokens - it would indeed be a very bad sign for their health.

I don't think they're doing that.

Estimates I've seen have their inference margin at ~60% - there's one from Morgan Stanley in this article, for example: https://www.businessinsider.com/amazon-anthropic-billions-cl...

1shooner 4 hours ago | parent | next [-]

>The bank's analysts then assumed Anthropic gross profit margins of 60%, and estimated that 75% of related costs are spent on AWS cloud services.

Not an estimate, an assumption.
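
For what it's worth, the two quoted assumptions together pin down implied AWS spend as a fraction of revenue. A quick sketch (the revenue figure is a placeholder, not a reported number):

```python
# Back-of-envelope using the assumptions quoted above:
# 60% gross margin, and 75% of cost of revenue going to AWS.

def implied_aws_spend(revenue, gross_margin=0.60, aws_share_of_costs=0.75):
    cost_of_revenue = revenue * (1 - gross_margin)
    return cost_of_revenue * aws_share_of_costs

# With a placeholder $1B of revenue: cost of revenue is $400M,
# of which $300M would be AWS spend.
print(implied_aws_spend(1_000_000_000))  # 300000000.0
```

So under those assumptions, whatever the real revenue number is, 30 cents of every revenue dollar goes to AWS.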

robotresearcher 2 hours ago | parent | next [-]

Those are estimates. Notice they didn’t assume 0% or a million %. They chose numbers that are a plausible approximation of the true unknown values, also known as an estimate.

simonw 3 hours ago | parent | prev [-]

If Morgan Stanley are willing to stake their credibility on an assumption I'm going to take that assumption seriously.

skywhopper 2 hours ago | parent | next [-]

This is a pretty silly thing to say. Investment banks suffer zero reputational damage when their analysts get this sort of thing wrong. They don't even have to care about accuracy, because there will never be a way to check this number, even if anyone wanted to go back and grade their assumptions, which also never happens.

simonw 2 hours ago | parent [-]

Fair enough. I was looking for a shortcut way of saying "I find this guess credible", see also: https://news.ycombinator.com/item?id=46126597

SiempreViernes 3 hours ago | parent | prev [-]

Calling this unmotivated assumption an "estimate" is just plain lying though, regardless of the faith you have in the source of the assumption.

simonw 2 hours ago | parent | prev [-]

I've seen a bunch of other estimates / claims of a 50-60% margin for Anthropic on serving. This was just the first one I found with a credible-looking link I could drop into this discussion.

The best one is from the Information, but they're behind a paywall so not useful to link to. https://www.theinformation.com/articles/anthropic-projects-7...

viscanti 3 hours ago | parent | prev | next [-]

They had pretty drastic price cuts on Opus 4.5. It's possible they're now selling inference at a loss to gain market share, or at least that their margins are much lower. Dario claims that all their previous models were profitable (even after accounting for research costs), but it's unclear that there's a path to keeping their previous margins and expanding revenue as fast or faster than their costs (each model has been substantially more expensive than the previous model).

simonw 3 hours ago | parent | next [-]

It wouldn't surprise me if they found ways to reduce the cost of serving Opus 4.5. All of the model vendors have been consistently finding new optimizations over the last few years.

manmal 2 hours ago | parent | prev [-]

I sure hope serving Opus 4.5 at the current cost is sustainable. It’s the first model I can actually use for serious work.

verdverm 3 hours ago | parent | prev | next [-]

I've been wondering about this generally... Are the per-request API prices I'm paying sold at a profit or a loss? My billing would suggest they are not making a profit on the monthly fees (unless there are a bunch of enterprise accounts in group deals going unused; I think I am one of those).

bpavuk 4 hours ago | parent | prev | next [-]

But those AI/ML researchers, aka LLM optimization staff, are not cheap. Their salaries have skyrocketed, and some are fought over like top-tier soccer stars and actors/actresses.

hollerith 4 hours ago | parent | prev [-]

The leaders of Anthropic, OpenAI and DeepMind all hope to create models that are much more powerful than the ones they have now.

A large portion of the many tens of billions of dollars they have at their disposal (OpenAI alone raised 40 billion in April) is probably going toward this ambition—basically a huge science experiment. For example, when an AI lab offers an individual researcher a $250 million pay package, it can only be because they hope that the researcher can help them with something very ambitious: there's no need to pay that much for a single employee to help them reduce the costs of serving the paying customers they have now.

The point is that you can be right that Anthropic is making money on the marginal new user of Claude, but Anthropic's investors might still get soaked if the huge science experiment does not bear fruit.

JumpCrisscross 4 hours ago | parent [-]

> their investors might still take a bath if the very-ambitious aspect of their operations do not bear fruit

Not really. If the technology stalls where it is, AI still captures a sizable chunk of the dollars previously paid to coders, transcribers, translators and the like.