vessenes 4 hours ago

What do you value a company at that has gotten to $14b in revenue in 3 years and has 60%+ margin on inference? Just out of curiosity.

JackSlateur 4 hours ago | parent | next [-]

60%+ margin on inference: what's the source?

And that's before R&D costs.

Of course, if one does not "pay" for the investment, profits are easily made ..

xvector 4 hours ago | parent | prev | next [-]

I am struggling with this because I have an Anthropic offer vs another equivalent offer that is all cash.

But project forward.

- What happens when Google builds a similar model? Or even Meta, as far behind as they are? They have far more cash flow than Anthropic to pour into these models.

- What happens when OSS is "enough" for most cases? Why would anyone pay 60% margins on inference?

What is Anthropic's moat? The UX is nice, but it can be copied. And other companies will have similarly intelligent models eventually. Margins will then be a race to the bottom, and the real winners will be GPU infra.

underyx 4 hours ago | parent | next [-]

If you have an offer, you can and should ask this question of whomever you're coordinating with. They will give you an honest answer.

jrjeksjd8d 4 hours ago | parent | prev | next [-]

I've been in this situation before. Anthropic has a stupid business model but the market can stay irrational longer than you can stay solvent. If you get in there you will be aligned with people who structurally do not lose.

fragmede 3 hours ago | parent | prev [-]

Big picture, sure. We can talk about the millions that corporations will make and who's going to do what. But you're a person. $1 million in options is probably meaningful for you. Companies aren't IPOing, but the secret is that they're still paying employees cash for their options. SpaceX employees have had what's called a tender offer, which means they get to sell some of their hypothetical SpaceX options for cold hard cash in the bank that they can use to pay their mortgage. There's zero guarantee that Anthropic will do such a thing before the bubble bursts. But if they do, and you're there, who cares about a software company's moat when you have enough money to buy a castle in Napa and pay to have a real actual moat with water in it and crocodiles, if that's what you want.

Others are made of different stuff, and are going to go right back to work, even though they could go off to a beach for the rest of forever somewhere.

xvector 3 hours ago | parent [-]

> who cares about a software company moat when you have enough money to buy a castle in Napa and pay to have a real actual moat with water in it and crocodiles, if that's what you want.

Doesn't this require their private market valuations to go well into the trillions?

Hamuko 4 hours ago | parent | prev [-]

Is their overall margin also around 60%? Or something saner, like 30%?

lotsofpulp 4 hours ago | parent [-]

Their overall margin is negative.

vessenes 2 hours ago | parent [-]

No, it’s not. This is a dangerous perspective, usually held by engineers who think that accounting doesn’t matter and don’t understand it.

You MUST amortize the capital expense (R&D, in this case) over the useful life of the assets it creates to answer this question.

The company (until this announcement) had raised $17B and has a $14B revenue run rate with a 60% operating margin.

It is only negative on margin if you assume the prior $14B of spend (i.e. Claude 4.6 plus whatever's unreleased) will have no value in 24 months. In that case, well, they probably wasted money training.

If you think their growth rate will continue, then you only need to believe the models have a useful life of 9 months or so to break even.

Anthropic is, according to Dario, profitable on every model they have trained if you consider them individually. You would do best to think: "will this pattern continue?"
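To make the amortization argument concrete, here's a minimal sketch. The $14B run rate and 60% operating margin are from the numbers above; the $5B training cost is purely an assumed figure for illustration:

```python
# Per-model unit economics with training capex amortized over the model's
# useful life. Run rate and margin are from the thread; the $5B training
# cost and 12-month lifespan are illustrative assumptions, not real figures.

def lifetime_net(annual_revenue, operating_margin, training_cost, life_months):
    """Lifetime net profit of one model after amortizing its training cost."""
    monthly_gross_profit = annual_revenue * operating_margin / 12
    monthly_amortization = training_cost / life_months
    return (monthly_gross_profit - monthly_amortization) * life_months

# $14B/yr at 60% margin = $700M/month of gross profit.
# An assumed $5B training run breaks even after ~7.1 months:
breakeven_months = 5_000_000_000 / (14_000_000_000 * 0.60 / 12)
print(round(breakeven_months, 1))  # ~7.1

# Over an assumed 12-month life, that model nets about $3.4B:
print(lifetime_net(14e9, 0.60, 5e9, 12))
```

The point of the sketch: whether the margin is "negative" depends entirely on the useful-life assumption you plug in, which is the crux of the disagreement in this thread.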

somewhereoutth 27 minutes ago | parent [-]

Sorry - if a model costs (say) $20B to train, lasts 12 months before it becomes obsolete, and generates $2B/month in revenue against $1B/month in inference costs, then it has lost $8B.

Or are you suggesting that in fact each model comes out ahead over its lifespan, and that all this extra cash is needed because the next model is so much more costly to train that it sucks up all the profits from the current one, but that this is OK because revenue is expected to scale as well?