aurareturn 2 days ago

$2b/month which is $24b/year. Not as much as I expected considering they were at $20b by end of 2025.[0] They only added $4b since?

Anthropic had $19b by end of February 2026 and they added $6b in February alone.[1] This means if they added another $6b in March, they're higher than OpenAI already.

However, I heard that OpenAI and Anthropic report revenue in different ways. OpenAI takes 20% of revenue from Azure sales and reports only that 20%; Anthropic reports all revenue, including AWS's share.[2]

[0]https://www.reuters.com/business/openai-cfo-says-annualized-...

[1]https://finance.yahoo.com/news/anthropic-arr-surges-19-billi...

[2]https://x.com/EthanChoi7/status/2036638459868385394

manquer 2 days ago | parent | next [-]

They aren't reporting anything yet. What we're hearing is just from news media, who get their leaks/info from investors, who get some form of IR reports/presentations.

Both will do public reporting only when they IPO[4] and have a regulatory requirement to do so every quarter. For private companies[1] reporting to investors, there are no fixed rules really[3].

Even for public companies, there is a fair amount of leeway in how GAAP[2] expects you to recognize revenue. The two ways you highlight are how you account for GMV (Gross Merchandise Value).

When you count GMV as revenue, the operating margin becomes much thinner, so multiples on absolute revenue get distorted.

For example, if you count GMV as revenue, then AMZN trades at only ~3x (~$2.25T/~$800B), compared to, say, MSFT ($2.75T/$300B) and GOOG ($3.4T/$400B), which both trade at ~9x their revenue.

While roughly similar in maturity, size, growth potential, and even with a large overlap of directly competing businesses, there is a huge (3x vs 9x) difference because AMZN's number includes retail GMV that GOOG and MSFT do not have at the same scale in theirs.
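Those multiples are simple enough to sanity-check; a quick sketch using the commenter's approximate market caps and revenues (rough figures from the comment, not audited numbers):

```python
# Price-to-revenue multiples from the approximate figures quoted above:
# (market cap, trailing revenue) in dollars.
companies = {
    "AMZN (revenue incl. retail GMV)": (2.25e12, 800e9),
    "MSFT": (2.75e12, 300e9),
    "GOOG": (3.4e12, 400e9),
}

for name, (market_cap, revenue) in companies.items():
    print(f"{name}: {market_cap / revenue:.1f}x revenue")
```

With these inputs AMZN comes out around 2.8x against roughly 9x for the other two, consistent with the 3x-vs-9x contrast above.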

---

[1] There are still a lot of rules for reporting to the IRS and other government entities, but the information we (and the news media) get is from investors, not leaks from government reporting, which would typically be private and illegal to disclose to the public.

[2] And how the Big 4, who sign off on company audits, prefer to account for it.

[3] As long as it is not explicit fraud or cooking the books, i.e. they are transparent about their methods.

[4] Strictly speaking, this would be covered in the prospectus (S-1) a few weeks before going public, and that is the first real look we get into the details.

SilverElfin 2 days ago | parent | next [-]

Does the GAAP accounting matter if everyone passively buys shares due to the new fast entry rules, which corruptly will force us all to buy into these companies? The fundamentals and true value seem less relevant than ever:

https://www.benzinga.com/markets/tech/26/03/51248353/michael...

throwaway2037 2 days ago | parent | next [-]

For other readers, I want to add some context here. NASDAQ is pondering whether to change its NASDAQ 100 index membership rules for IPOs. Currently, there is a three-month waiting rule for IPOs. They are proposing (not sure if it has passed yet) to remove this waiting rule.

Real question: What is the real impact of this rule change? To me, it seems so minor. Three months is just a blip in time for any long term investor.

    > which corruptly will force us all to buy into these companies
Why is this "corrupt"? That term makes no sense here.

Also, if you don't like the NASDAQ 100 rules, then you don't have to invest in securities that track it. You can trade the basket yourself minus the names that you don't like.

Finally, I would say that the S&P 500 index is far more important than the NASDAQ 100. To join the S&P 500, the name must be profitable for the most recent year (four quarters). Recall that Uber IPO'd in 2019 but was not profitable until 2023. OpenAI probably will not be profitable when it goes public; thus, it will not join the S&P 500 immediately.

I think the bigger story is SpaceX. It will likely IPO very close to a 1T USD market cap (with a small float: ~10%). And, thanks to StarLink, I assume that SpaceX is now wildly profitable.

nixon_why69 2 days ago | parent | next [-]

The "corruption" allegation is that for, yes, SpaceX, index funds will effectively be "forced" to buy in right away at their IPO price, rather than seeing where they settle before getting the money in. Given that most people have most of their money in index funds, it's sort-of an automatic buy and raises some hackles about a fixed game.

Saying "you can trade the basket yourself minus the names you don't like" is not a real counterargument. Most of us are not going to do that, I'm not going to do that and I'm writing this post right now. John Doe is certainly not doing that.

cluckindan 20 hours ago | parent [-]

”Given that most people have most of their money in index funds”

”Most people” is doing a lot of heavy lifting there; only 52% in the US, and just 25-30% globally, invest their money.

Fripplebubby 2 days ago | parent | prev [-]

> Also, if you don't like the NASDAQ 100 rules, then you don't have to invest in securities that track it.

Isn't the idea with the indexes that they allow you to intentionally not take an activist position in the market? The exposure is not tied to any underlying market hypothesis. In other words, if we make people form a market hypothesis in order to decide whether or not to hold this index, it has failed in its purpose.

manquer 2 days ago | parent | prev | next [-]

Diluting the index entry rules only devalues the index's utility. When it becomes a bigger problem, other indices with higher quality controls will outcompete the current ones and be used by asset managers seeking safety.

More likely than not, most of us are already holding stock in these companies one way or another. All the Mag 7 hold a major chunk of OAI and Anthropic stock anyway, slower entry does not make it less risky for us.

Even if the big tech companies did not hold any stock, they are still the biggest vendors, and their own order books are hugely impacted by AI demand from these two (and others in this space). Either way, we are all in this together.

chronc6393 2 days ago | parent | next [-]

> When it becomes a bigger problem, other indices with higher quality controls will out compete the current ones and be used by asset managers seeking safety

Doubt it.

The world does not allow perfect competition.

JumpCrisscross a day ago | parent [-]

> world does not allow perfect competition

What does this have to do with anything?

Plenty of asset managers construct indices to save fees.

ml-anon 2 days ago | parent | prev | next [-]

lol imagine someone believing in the invisible hand of the free market in 2026

manquer 2 days ago | parent [-]

In the short term there are distortions and inefficiencies. It may feel like the free market is done.

However in the long term, economics usually finds the most efficient way.

Maintaining inefficient structures like tariffs or monopolies becomes more and more expensive and eventually untenable and disruptions will occur.

farialima 2 days ago | parent [-]

In the long term we are all dead. (Keynes)

Really feels like 1928

minraws 2 days ago | parent | prev | next [-]

I personally find this to be the correct solution. Since indexes are over-inflated either way, this brings much-needed sanity: your index is now worth much more or much less based on how you view the AI bubble, and you are forced to understand and correct your forward-looking investments accordingly.

Passive investments are good, but taken too far, as they clearly have been in the last decade, they become a scam. Everyone is SIPing into it, and there is infinite liquidity. Until one big whale finally decides to book it; then all hell will break loose on the same damn day.

qotgalaxy 2 days ago | parent | prev [-]

[dead]

gloryjulio 2 days ago | parent | prev | next [-]

Yes, GAAP absolutely matters.

You can just choose not to play the accounting game, and only pick the ones that are actually GAAP-viable as investment opportunities. For example, Mag 7 minus Tesla are all relatively cheap when they dip.

Sometimes the best play is just not to play. If you think they are too risky, walk away. There are enough good opportunities.

throwaway2037 2 days ago | parent [-]

    > mag7 (minus) tesla are all relatively cheap when they dip
I asked ChatGPT for a list of Magnificent 7 stocks and their most recent price to earnings (PE) ratios.

    Company Ticker P/E Ratio
    Apple Inc. AAPL ~33
    Microsoft Corporation MSFT ~25
    Alphabet Inc. GOOGL ~29
    Amazon.com Inc. AMZN ~30
    NVIDIA Corporation NVDA ~38
    Meta Platforms Inc. META ~28
    Tesla Inc. TSLA ~378
In the last 50 years, I think the median PE ratio for S&P 500 index is about 15. Seven and below is considered rock bottom, and 30 and above is very high. These PE ratios look pretty damn high to me.
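One way to read that table is to invert each P/E into an earnings yield and compare it against the ~15x long-run median mentioned above. A sketch, using the approximate (and possibly stale) ChatGPT-sourced figures from the table:

```python
# Convert each quoted P/E into an earnings yield (1 / P/E) and compare it
# with the ~15x long-run S&P 500 median mentioned above.
pe_ratios = {"AAPL": 33, "MSFT": 25, "GOOGL": 29, "AMZN": 30,
             "NVDA": 38, "META": 28, "TSLA": 378}
median_yield = 100 / 15  # ~6.7% earnings yield at the historical median P/E

for ticker, pe in sorted(pe_ratios.items(), key=lambda kv: kv[1]):
    print(f"{ticker}: {100 / pe:.1f}% earnings yield (median ~{median_yield:.1f}%)")
```

Every name in the table yields well under the ~6.7% implied by a 15x median, which is the quantitative version of "these PE ratios look pretty damn high".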

How much do these names need to "dip" for you to consider them cheap?

gloryjulio 2 days ago | parent [-]

There are a few things to consider if you are in the investment space:

- Growth rate: you can't compare them to the average single-digit-growth companies or dividend-focused companies. Most of these tech companies' revenues are still growing at double digits with a good moat. P/E is a good measure, but it's not absolute. If you believe they can sustain their growth, then it's a good bet. And you can choose not to buy into their growth stories too. At the end of the day, investment is about judgment calls.

- History benchmark: some of their P/Es are at historical lows, so they are actually cheaper than before.

- P/E TTM and forward P/E: how much P/E TTM are they at? How much forward P/E are they projecting? If forward P/E is significantly lower, that means the current analyst consensus is that they will grow in the future.

- P/E is a number, but it's not everything. You need to consider multiple things to decide if a stock is undervalued for you. It's highly subjective, as different interpretations are common.

- This post is about whether you want to play the GAAP game with private tech companies. My point is that there are still many public companies that are cheap at certain points. You just need to be patient and be willing to research and wait. For example, META at around 500 was a buy for me; since then it has rebounded, so it's still good but not as undervalued as a few days ago.

master-lincoln 2 days ago | parent | prev [-]

what would force you? I guess if you are a greedy bastard you might feel that way...

aurareturn 2 days ago | parent | prev [-]

  They aren't reporting anything yet. What we hearing is just from news media who get their leaks/info from investors who get some form of IR reports/ presentation.
The $24b figure is literally in OpenAI's announcement.

The $19b ARR and the $6b added in February came directly from Anthropic's CEO recently.

diatone 2 days ago | parent | next [-]

Until they’re using consistent methods of reporting those figures, they’re not comparable. Same as any other company pre vs post IPO

aurareturn 2 days ago | parent [-]

Was referring to this:

  What we hearing is just from news media who get their leaks/info from investors who get some form of IR reports/ presentation.
lelanthran 2 days ago | parent | prev | next [-]

> The $24b figure is literally in OpenAI's announcement.

And? That's not a legislated report; they can use whatever mechanism they want, without disclosure, to produce the numbers.

Let's wait until they are regulated as a public company; then their mechanism has to be both aligned with what legislation requires and clearly documented in their reports.

seanhunter 2 days ago | parent [-]

> they can use whatever mechanism they want to, without disclosure, to produce numbers.

That would be fraud against whoever participated in this round, so no. Just because they aren't regulated doesn't mean they are literally free to do whatever they want to close the round.

lelanthran 2 days ago | parent | next [-]

> Just because they aren't regulated doesn't mean they are literally free to do whatever they want to close the round.

What makes you think their public announcements are aligned with what they give prospective investors?

seanhunter 2 days ago | parent [-]

The fact that, in all the rounds I have been involved in, all public announcements related to the round go through the legal team to check for possible material misstatements that could cause exactly this kind of problem.

lelanthran 2 days ago | parent [-]

> The fact that in all the rounds I have been involved in all public announcements related to the round go through the legal team

All public announcements go through the legal team, regardless of whether it's related to the round or not.

adgjlsfhk1 2 days ago | parent | prev [-]

it would be fraud only if they're also telling their investors the same numbers.

bandrami 2 days ago | parent | prev | next [-]

Announcing isn't reporting. Am I the only one old enough to remember Enron?

robonot 2 days ago | parent | prev | next [-]

True. That's reporting and they are also reporting numbers internally, which are getting leaked.

2 days ago | parent | prev | next [-]
[deleted]
manquer 2 days ago | parent | prev [-]

I am reminded of the "I declare bankruptcy" meme from the 2000s TV series The Office.

When we say "reporting", it means there are statutory submissions with an auditor signing off, with legal liability. As the other reply referenced, the consequences for doing this incorrectly can be severe: Arthur Andersen is no more, after all, because of Enron.

A Press Release (of a private entity) does not have to satisfy this high bar.

A press release does not mean no constraints: for public companies, disclosure of important information by officers and other insiders has strong controls, even if it's just a rocket/poop emoji on a casual social media platform; lawyers have to refile it with the SEC in the expected format. Even private companies have restrictions against claiming things fraudulently to investors, but these are accredited investors, with lesser controls than for retail.

maerF0x0 2 days ago | parent | prev | next [-]

30x revenues at 17% revenue growth is... aggressive.

jsnell 2 days ago | parent | next [-]

Except it's not 100x revenues, and it's not 17% growth. I don't know where you got those numbers from?

The numbers OpenAI gave in the post would mean a 30x multiple pre-money. And the $20B -> $24B run-rate growth since the start of the year could plausibly mean anything from 110% to 200% annualized growth, depending on whether that happened over two or three months. The $24B is a lower bound as well, since they only gave us one significant digit for the monthly revenue.
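That annualization is easy to check with a sketch (round inputs only; with these figures it comes out to roughly 107% and 199%, in the same ballpark as the range above):

```python
# Annualized growth implied by a run-rate move from $20B to $24B,
# depending on whether the move took two or three months.
start, end = 20e9, 24e9

for months in (2, 3):
    annualized = (end / start) ** (12 / months) - 1
    print(f"over {months} months: {annualized:.0%} annualized")
```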

maerF0x0 2 days ago | parent [-]

You're right, I was thinking of 100x revenues and forgot to confirm the math. Updated to reflect your point. ChatGPT itself provided the 17% number (its most recently available growth rate)...

YetAnotherNick a day ago | parent | prev [-]

> 17% revenue growth

I think ads is going to massively change this number.

natas 2 days ago | parent | prev | next [-]

OpenAI is a few years behind Anthropic, and it's unlikely they'll catch up at this point.

mrklol 2 days ago | parent | next [-]

Where exactly are they behind?

PunchTornado 2 days ago | parent [-]

everywhere, but most important in ethics

serf 2 days ago | parent [-]

your ethics.

let's not forget that these major LLMs are all the children of corporate hyper-piracy en masse. None of them are ethical even in origin, unless you're talking about the pre-product company-charter kind of ethics, like Google.

PunchTornado 2 days ago | parent | next [-]

You can't put anthropic and openai in the same basket regarding ethics. One accepted Department of War's conditions and the other not.

boppo1 a day ago | parent | next [-]

Last I heard, Claude was the model powering Maven when it bombed that school. Most aren't up to date on that because Anthropic launders its culpability through Palantir. Anthropic is better at optics, not ethics.

PunchTornado 12 hours ago | parent [-]

No matter what you say, you know the truth yourself: the DoW wanted to go over Anthropic's red lines and they said no, while OpenAI said yes. This is as clear as day to everyone, and you are just lying to yourself to believe something else.

Teelo 2 days ago | parent | prev [-]

How is anthropic training their models? Surely they're not using other people's work without their permission, right?

Agentus 2 days ago | parent | prev [-]

What origins of ethics?

You use the term piracy, which potentially hints at your biases.

American IP laws aren’t universal, and last I checked neither is it popular in Silicon Valley.

The institutions surrounding IP piracy are an American strong-arm attempt to own the unownable, and to use Russell conjugates to make the flagrant attempt seem just.

muskstinks 2 days ago | parent | prev | next [-]

I'm following this very closely and I'm stunned. Any info on why you think they are behind Anthropic by years?

I do see less quality from reasoning in ChatGPT compared to Gemini, but otherwise I'm not seeing a gap of a year or more.

baq 2 days ago | parent | prev | next [-]

They’re about even in general, but for me OpenAI is slightly or significantly ahead in the areas I care about the most. E.g. claude code is a backend slop cannon if you don’t tell codex/gemini to review the outputs.

empath75 2 days ago | parent | prev [-]

Anthropic is _unquestionably_ ahead product wise because of their agentic coding tools, but they are not _years_ ahead. In particular, their advantage is in the harness, which is not hard to replicate!

rafaelmn 2 days ago | parent | next [-]

Lol, if CC is the advantage, that's the largest indictment of AI coding there is. Don't get me wrong, CC gives me good results, but I very much doubt their tooling is great; they just spew tokens at the model, and the model is quite good at making sense of it and following through.

I suspect they have better RL setup for coding that makes their models better at coding than GPT/Gemini in practice.

0xy 2 days ago | parent | prev [-]

It's not just the CC harness. The models are fundamentally better.

troupo 2 days ago | parent | prev | next [-]

And that is revenue only. In the past 15 or so years, most US companies (and especially startups) always talk about revenue only, whereas only profit should matter.

E.g., what good is $20 billion per year when "OpenAI is targeting roughly $600 billion in total compute spending through 2030"? That is $150 billion per year.

muzani 2 days ago | parent | next [-]

The startup game is about building assets and then cashing out on them during exit.

Assets are harder to measure. Facebook used to say something silly like every user was worth $100. That sounded ridiculous for a completely free app but over a decade later, the company is worth more than that. Revenue is an easier way of measuring assets than profit.

Profit doesn't really matter. It gets taxed. But it's not about dodging taxes; it's because sitting on a pile of money is inefficient. They can hire people. They can buy hardware. They can give discounts to users with high CLTV. They can acquire instead of building. It's healthy to have profit close to $0, if not slightly negative. If revenues fall or costs increase, they can make up for the difference by just firing people or cutting unprofitable projects.

Also when they're raising money, it makes absolutely no sense to be profitable. If they were profitable, why would they raise money? Just use the profits.

aurareturn 2 days ago | parent | prev | next [-]

It's not as much as you think. Google is spending $185b on data centers this year alone. Amazon is spending $200b this year. Total capex for big tech is ~$700b in 2026 and we're not including neo clouds, Chinese clouds, and other sovereign data centers.

Since everyone is trying to get compute from anywhere they can, including OpenAI going to Google, it's hard to tell what is used internally vs externally.

For example, it's entirely possible that Google's internal roadmap for Gemini sees it using $600b of compute through 2030 as well. In that case, OpenAI needs to match since compute is revenue.

hvb2 2 days ago | parent [-]

But if Gemini doesn't end up using the compute because of whatever reason, Google has other ways to monetize that compute. OpenAI doesn't?

So the same money spent by OpenAI and Google doesn't carry nearly the same amount of risk?

aurareturn 2 days ago | parent [-]

  OpenAI doesn't?
Why not? They've openly said they could in theory sell compute to others if they can't use it all.
hvb2 2 days ago | parent | next [-]

And who would be buying this from them? Let's say you're anthropic, would you give money to your competitor?

I'll also add that Google is already a player in that space, so it's more likely to sell the compute off easily.

adgjlsfhk1 2 days ago | parent | prev [-]

This isn't credible, though. Them not being able to use all their compute likely means the AI bubble has popped, so they won't get a good price for it.

Swizec 2 days ago | parent | prev | next [-]

> Wheras only profit should matter

Profit is money you couldn’t figure out how to spend. During growth, you want positive operating margins with nominal profits. When the company/market matures, you want pure profits because shareholders like money. If you can find a way to invest those profits in new areas of growth, that’s better.

troupo 2 days ago | parent | next [-]

> Profit is money you couldn’t figure out how to spend.

Profit is the money showing your business is sustainable. Ever since the ZIRP era, US companies have kept haemorrhaging money at a rate that is physically impossible to recoup.

If OpenAI plans to lose 100+ billion dollars per year for half a decade, what profits are you talking about to offset the losses?

> When the company/market matures, you want pure profits because shareholders like money.

Ah yes. Shareholders like money. And not, you know, basic accounting like "we need money to actually pay salaries, pay for equipment and offices etc. without perpetually relying on seeming endless investor money".

chronc6393 2 days ago | parent | next [-]

> what profits are you talking about to offset the losses?

You don’t need profit to offset the losses.

You can simply reduce spending / expenses.

CraigRood 2 days ago | parent | next [-]

In principle yes, but all metrics so far suggest they are losing money on every user interaction. There is very little network effect with these tools, so it's not like they can start cutting back on staff and feature deployment.

LaGrange 2 days ago | parent | prev [-]

lol that’s a line so incredibly naive it hurts.

One does not “simply” reduce spending.

chronc6393 2 days ago | parent [-]

> One does not “simply” reduce spending.

Why does stock price go up after mass layoffs?

bumby 2 days ago | parent | next [-]

What happens when the only way to reduce spending is to reduce your assets? Seems like circular logic at that point. I suppose the market isn’t expected to be rational all the time, but eventually it is.

justsomehnguy 2 days ago | parent | prev [-]

By your logic, any company should just lay off everyone and profit from the stock price going to infinity.

The company would no longer function, of course, but why would that matter if the stock price is through the Moon?

Swizec 2 days ago | parent | prev [-]

> Profit is the money showing your business is sustainable.

Notice I said you should have nominal profits.

> Ah yes. Shareholders like money. And not, you know, basic accounting like "we need money to actually pay salaries, pay for equipment and offices etc. without perpetually relying on seeming endless investor money".

All of these are costs that reduce your profits.

A maximally profitable business fires all employees except shareholders, closes every office, stops all RnD, and leases IP or real estate to others on long-term deals that never need to be renegotiated.

badpun 2 days ago | parent | prev | next [-]

A lot of investment gets amortized over many years, so even if you're investing all your free cash you'll still show a lot of profit.

aurareturn 2 days ago | parent | prev [-]

Not sure why you’re downvoted.

Everyone wants to treat OpenAI like a car wash business where they need to make a profit almost immediately. I don’t know why people can’t understand that the industry is in a rapid growth stage and investing the money is more important than making a profit now. The profits will come later.

troupo 2 days ago | parent | next [-]

"Profits will come later" https://news.ycombinator.com/item?id=47597480

nutjob2 2 days ago | parent | prev | next [-]

> The profits will come later.

The nearly $1T hand wave. Forgive me if I ask how. Might give it some credence if Anthropic and Google weren't pulling even with or surpassing them in various ways or markets.

What's worse, they mostly seem to have retail-market name recognition, which is arguably the hardest, or maybe an impossible, market to make money from.

aurareturn 2 days ago | parent [-]

  Whats worse is they mostly seem to have retail market name recognition which is arguably the hardest, or maybe the impossible market to make money from.
That doesn't seem to be the case at all. Meta and Google are two of the most profitable companies in history, off the backs of free users.

Apple is another one that focuses almost exclusively on retail and is also one of the most profitable in history.

FatherOfCurses 2 days ago | parent | prev [-]

> profits will come later

Holy crap, is it the year 2000 again?

aurareturn 4 hours ago | parent [-]

2000s, 2010s, and 2020s. This is how tech companies work, especially in a new industry.

pier25 2 days ago | parent | prev | next [-]

Give me a billion and I'll have 500M of revenue in no time by selling dollars at 50 cents.

aurareturn 2 days ago | parent [-]

Why are we treating OpenAI and Anthropic differently than say, Amazon or Uber? Both companies invested in growth for many years before making a profit. Most tech companies in the last 2-3 decades lost money for years before making a profit.

Why are we saying that OpenAI and Anthropic can't do the same?

lmm 2 days ago | parent | next [-]

Amazon had a clear business model. They had positive gross margin from, if not day 1, then pretty close to it.

I remain skeptical of Uber.

Sure, maybe OpenAI and Anthropic will make it work. It's not impossible. But it's far from guaranteed.

aurareturn 2 days ago | parent [-]

OpenAI and Anthropic have positive gross margins for inference.

Uber generates about $1b in profit yearly now.

lmm 2 days ago | parent [-]

> OpenAI and Anthropic have positive gross margins for inference.

Maybe, if you take their word for it, and treat the models as capital assets rather than part of the COGS for the inference product. That's pretty far off from where Amazon was at.

hirako2000 2 days ago | parent | prev | next [-]

Two reasons: they somewhat broke even, and they kept getting investment. The potential for a quasi-monopoly was obvious.

OpenAI can't claim either.

aurareturn 2 days ago | parent | next [-]

How did Uber somewhat break even? They lost $34b before making a profit.

Uber was only on a path to monopoly in the US, not worldwide. It's lost to local competitors in most countries. And it could get disrupted by self-driving cars soon.

OpenAI’s SOTA LLM training smells like a natural monopoly or duopoly to me. The cost to train the smartest models keeps increasing. Most competitors will bow out, as they do not have the revenue to keep competing. You can already see this with a few labs looking for a niche instead of competing head-on with Anthropic and OpenAI.

vlovich123 2 days ago | parent | next [-]

The cost of copying SOTA models though is super cheap and doesn’t take super long.

aurareturn 2 days ago | parent [-]

How do you distill when OpenAI and Anthropic inevitably move to tasks running in the cloud? E.g., "Go buy this extremely hard-to-get concert ticket for me."

Distilling might only be effective in the chat bot dominant era. We are about to move to an agents era.

Furthermore, I’m guessing distilling will get harder and harder. The Claude Code leak shows some primitive anti-distilling methods already. There’s research showing that models know when they’re being benchmarked. Who’s to say Anthropic and OpenAI aren’t able to detect when their models are being distilled?

adgjlsfhk1 2 days ago | parent [-]

Even ignoring distillation, so long as hardware or ML gets better over time, training a new model from scratch is cheaper the later you do it.

aurareturn 4 hours ago | parent | next [-]

If hardware gets better over time, they also get better for OpenAI.

ef3dfd 2 days ago | parent | prev [-]

Yep the poster is assuming efficiencies will not come.

Absolutely they will. And this is a huge problem for OAI - given Google is targeting vertical integration, they will acquire a cost-advantage. As long as the model performance is good enough, they will kick OAI and Anthropic out in the long-run.

The valuations of OAI and Anthropic are nonsense. A true valuation would incorporate failure risk, which is natural for startups and fast-growing, money-losing firms. Anyone who takes them seriously is incredibly delusional.

dionidium 2 days ago | parent | prev [-]

> How did Uber somewhat break even? They lost $34b before making a profit.

It took them ~14 years to lose that $34 billion. Some projections suggest that OpenAI has lost a third of that in a single quarter. Even the most optimistic projections indicate that they're losing that much every 2-3 years. There's talk that they might lose ~$150B before profitability.

These are just numbers on a page to regular people, but $34 billion and $150 billion are very different numbers.

aurareturn 4 hours ago | parent [-]

Uber is a taxi company hoping for a monopoly in the US. OpenAI is a software company hoping for a monopoly in many countries.

outside1234 2 days ago | parent | prev [-]

Worse, Google can afford to outspend them in this game and basically run them both out of money.

windward 2 days ago | parent | prev | next [-]

>Most tech companies in the last 2-3 decades lost money for years

Yes

>before making a profit.

No

2 days ago | parent | prev | next [-]
[deleted]
troupo 2 days ago | parent | prev | next [-]

> Why are we treating OpenAI and Anthropic differently than say, Amazon or Uber?

The same Uber that lost close to 30 billion dollars over 10 years to subsidize its price dumping?

No, we are not treating OpenAI differently than Uber.

pier25 2 days ago | parent | prev | next [-]

It's not even remotely comparable. Uber burnt some $30B over a decade or so.

aurareturn 2 days ago | parent [-]

It seems like it is comparable based on what you just said.

mrweasel 2 days ago | parent [-]

OpenAI have burned nearly 25 times what Uber did, it has more competitors, billions of dollars in obligations and no clear way to profitability.

The problem for OpenAI is that the cost of getting where they are now has been too high, and competitors can now establish themselves for much less money.

2 days ago | parent | prev | next [-]
[deleted]
Forgeties79 2 days ago | parent | prev [-]

[dead]

merlindru 2 days ago | parent | prev | next [-]

Why should only profits matter? If I had a killer product today that I just need to sell tomorrow, wouldn't you still invest today, knowing I'll probably only start to make money tomorrow (or perhaps next week)?

The expectation is that they'll eventually make money; they can't raise forever. Startups are typically unprofitable only for their first few years, and most companies that have existed for a long while have been profitable.

And since they're expected to make a LOT of money, everyone wants a piece of that future pie, pushing the valuation and the amount raised up to admittedly somewhat delusional levels, like here.

bandrami 2 days ago | parent | next [-]

> why should only profits matter?

In this case because it's not clear that anybody has actually figured out how to sell inference for more than it costs

nl 2 days ago | parent [-]

It's well known everyone is making great money on inference. The cost is training.

Whether GPT-5 was profitable to run depends on which profit margin you’re talking about. If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis), it seems to be about 30% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.

(They go on to point out that there are other costs which might mean they didn't break even overall - although I suspect those costs should be partially amortized over the whole GPT 5.x series, not just 5.0)

https://epochai.substack.com/p/can-ai-companies-become-profi...

https://martinalderson.com/posts/are-openai-and-anthropic-re... (with math working backwards from GPU capacity)
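Under the assumptions in the Epoch quote above, the gross-margin arithmetic is simple enough to sketch. The dollar figures below are illustrative only, not OpenAI's actual numbers:

```python
# Gross margin = (revenue - cost of compute) / revenue.
# Illustrative figures only: $1.00 of inference revenue, $0.70 of compute.
def gross_margin(revenue: float, compute_cost: float) -> float:
    return (revenue - compute_cost) / revenue

margin = gross_margin(1.00, 0.70)
print(f"{margin:.0%}")  # roughly the ~30% figure quoted above
```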

"Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company"

https://simonwillison.net/2025/Aug/17/sam-altman/

"There’s a bright spot, however. OpenAI has gotten more efficient at serving paying users: Its compute margin—the revenue left after subtracting the cost of running AI models for those customers—was roughly 70% in October, an increase from about 52% at the end of last year and roughly 35% in January 2024."

https://archive.is/OqIny#selection-1279.0-1279.305 (Note this is after having to pay higher spot rates for compute because of higher than expected demand)
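Those quoted margins can be read the other way, as compute cost per dollar of revenue. A rough sketch, using only the dates and percentages from the quote above:

```python
# Compute cost per $1 of revenue implied by the quoted compute margins.
# Dates and percentages are the ones quoted above; a rough sketch only.
quoted_margins = {"January 2024": 0.35, "end of last year": 0.52, "October": 0.70}
for period, margin in quoted_margins.items():
    cost_per_dollar = 1.0 - margin
    print(f"{period}: ${cost_per_dollar:.2f} of compute per $1 of revenue")
```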

bandrami 2 days ago | parent [-]

> It's well know everyone is making great money on inference.

That is not, in fact, "well known", but based entirely on the announcements of the inference providers themselves who also get very cagey when asked to show their work and at least look like they're soliciting a constant firehose of investment money simply to keep the lights on. In particular there's a troubling tendency to call revenue "recurring" before it actually, you know, recurs.

nl 2 days ago | parent [-]

> based entirely on the announcements of the inference providers themselves who also get very cagey when asked to show their work

I mean sure, it's self reported.

But the inference prices somewhere like Fireworks or TogetherAI charges are comparable to what Google/AWS/Azure charge for the same model, and we know they aren't losing money - they have public accounts that show it, eg:

https://au.finance.yahoo.com/news/wall-street-resets-amazon-...

Fireworks’ gross margin—gross profit as a percentage of revenue—is roughly 50%, according to the same person

https://archive.is/Y26lA#selection-1249.65-1249.173

> In particular there's a troubling tendency to call revenue "recurring" before it actually, you know, recurs.

If someone has a subscription then yes that is pretty normal.

bandrami 2 days ago | parent [-]

> If someone has a subscription then yes that is pretty normal.

Not if you've substantively changed rate limits 3 times in the last 5 months while still counting those forecast revenues. In most industries that's called rug-pulling.

baq 2 days ago | parent [-]

It doesn’t matter what you call it. A recurring subscription on the books is a recurring subscription. Yes you can cancel anytime (how generous of them), it also doesn’t matter.

Barrin92 2 days ago | parent | prev [-]

not if your product is selling two dollars for one dollar; as soon as you start to charge more, I'll switch to one of your twenty competitors

profit isn't a function of having a killer product, it's a function of having no competition

aurareturn 2 days ago | parent | next [-]

And why do you think twenty competitors can stay competitive for years to come?

Industries always consolidate and winners emerge. SOTA LLMs look like a natural monopoly or duopoly to me because the cost to train the next model keeps going up such that it won't make sense for 20 competitors to compete at the very high end.

TSMC is a perfect example of this. Fab costs double every 4 years (Rock’s Law). It's almost impossible to compete against TSMC because no one has the customer base to generate enough revenue to build the next generation of fabs - except those who are propped up by governments, such as Intel and Rapidus. Samsung is basically backed by the SK government.
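Rock's Law compounds quickly. A toy sketch (the $20B base cost here is a hypothetical figure for illustration, not a quoted one):

```python
# Rock's Law sketch: fab cost doubles roughly every 4 years.
# The $20B base cost is a hypothetical figure for illustration.
def fab_cost_bn(base_bn: float, years: float, doubling_years: float = 4.0) -> float:
    return base_bn * 2 ** (years / doubling_years)

print(fab_cost_bn(20, 8))   # two doublings: 80.0
print(fab_cost_bn(20, 12))  # three doublings: 160.0
```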

I don’t see how companies can catch OpenAI or Anthropic without similarly strong revenue growth.

harmonic18374 2 days ago | parent | next [-]

Google has already surpassed them both in all areas except coding. People on HN only look at benchmarks, but Gemini's multimodal understanding (things like identifying what a plant is), normal user use cases (other than chatting), and integration with other tools are much better.

It's believable that Meta, ByteDance, etc. can catch up too. It is not certain that scaling will meaningfully increase performance indefinitely, and if it stops soon, they surely will. Furthermore, other market conditions (US political instability) can enable even more labs, like Mistral, to serve as compelling alternatives.

Uber, TSMC, etc. have strong moats in the form of physical goods and factories. LLMs have nothing even remotely comparable. The main moat is in knowledge, which is easy to transfer between labs. Do you think all the money that goes into training a model goes into the actual final training run? No, it is mostly experiments and failed ideas, which do not have to be repeated by future labs and offshoots.

otabdeveloper4 2 days ago | parent [-]

> It is not certain that scaling will meaningfully increase performance indefinitely

It's certain that it won't. We've already hit diminishing returns.

outside1234 2 days ago | parent | prev | next [-]

Google has completely caught OpenAI. Anthropic has a better coding model, but I'm sure Google is working on that too.

baq 2 days ago | parent [-]

> Anthropic has a better coding model

I’ll be polite and call this statement a ‘very debatable’ one.

Barrin92 2 days ago | parent | prev | next [-]

>Industries always consolidate and winners emerge.

no, most industries just sell boring generic products; a few industries favor monopolists. Semiconductors are one of them, but LLMs are as far removed from that business as is physically possible.

TSMC makes the most complicated machines humans have ever built; an LLM requires a few dozen nerds, a power plant, a few thousand lines of Python, and chips. That's why, if you're Elon Musk, you could buy all of the above and train yourself an LLM in a month.

LLMs are comically simple pieces of software, they're just big. But anyone with a billion dollars can have one; they're all going to be commoditized and free in due time, like search. Copying a lithography machine is difficult, copying software is easy. That's why Google burrowed itself into email, and browsers, and your phone's OS. The problem for OpenAI is they don't have any of that; there's already half a dozen companies that, for 99% of people, do what they do.

komali2 2 days ago | parent | prev [-]

The barrier to replicating TSMC isn't just cost, it's supply chain, geopolitics, and talent.

Only one company on Earth can make the EUV lithography machines TSMC buys for their highest-end fabs, and export controls keep those machines out of the PRC.

The PRC tried to brute force this supply chain backed by the full might of the Party's blank check, all red tape cut, literally the best possible duplication scenario, and they failed.

purpleidea 2 days ago | parent | next [-]

The PRC didn't fail, they haven't finished succeeding yet.

baq 2 days ago | parent | prev [-]

They will succeed eventually since they have proof it’s possible and their plans span decades. I expect them to have working EUV in 10 years. Whether it’ll still be bleeding edge tech is a different question I dare not guess the answer to.

ds2df 2 days ago | parent | prev | next [-]

"No competition" is a bit extreme. Limited competition, yes, due to competitive advantages.

susupro1 2 days ago | parent | prev [-]

[dead]

nl 2 days ago | parent | prev [-]

What is the point - exactly - of profit?

Profit is money you can't find a use for to grow your business, so you give some of it to the government in the form of tax.

Also there is a big difference between operational expenses and capital expenses like building data centers.

I think OpenAI is being very aggressive on the growth vs conservative financial management spectrum but just saying "only profit should matter" is just wrong.

bandrami 2 days ago | parent | next [-]

> What is the point - exactly - of profit?

It's what attracts capital investment, which businesses need

nl 2 days ago | parent [-]

OpenAI seems to do reasonably well at attracting capital investment without profits.

As did Amazon, Google, Meta etc etc.

bandrami 2 days ago | parent | next [-]

OpenAI is great at attracting people who say "yeah, sure, I'll give you capital at some point in the future" who then never actually give them the capital (or at least haven't yet).

nl 2 days ago | parent | next [-]

They seem to be spending a lot of cash too...

2 days ago | parent | prev [-]
[deleted]
ngold 2 days ago | parent | prev [-]

If I remember correctly, Facebook spent 10 years raising money before its IPO.

Could be wrong though.

troupo 2 days ago | parent | prev [-]

What's the point - exactly - of a company being sustainable?

nl 2 days ago | parent [-]

Being profitable isn't the same as sustainable.

Even a simple shop isn't profitable for months if it needs to buy stock up front, and run some ads to let people know about it. The money for that comes from the shop owners as an investment.

This is the same thing but on a slightly bigger scale, over a longer time frame.

troupo 2 days ago | parent [-]

If your shop is unprofitable for years with no chance to recoup any of the costs, you close it, as your investments run out, and investors and banks stop giving you money as you keep losing them.

US tech companies just continue operating because "revenue and growth".

nl 2 days ago | parent | next [-]

> US tech companies just continue operating because "revenue and growth".

US tech companies are some of the most profitable businesses in history.

Google made over $130B in profit last year, Meta $60B.

I'm old enough to have had exactly the same arguments (on Slashdot for Google, here for FB) for both before their IPOs.

It's an uninformed argument and people should know better.

cindyllm 2 days ago | parent | prev [-]

[dead]

interludead 2 days ago | parent | prev | next [-]

I'd be pretty cautious comparing those numbers directly

dmix 2 days ago | parent | prev [-]

Still a huge amount of revenue for any company. Those $20/month fees are going to triple in a couple of years. But the VCs expect much, much more.

willio58 2 days ago | parent [-]

Friends of mine working in AI companies are saying we’ll be lucky if they only triple. More like 10-20x long term, especially for enterprise

riskable 2 days ago | parent | next [-]

This assumes that these companies aren't going to use smaller providers or hosting models themselves. THAT is the great big assumption going into all the Big AI funding.

I think it's a very, very bad assumption. After trying GLM-5 and Qwen3 on Ollama Cloud, not only were they faster than OpenAI's offerings (by a huge amount), they were just as good if not better at doing what I asked.

Claude Code is still superior to anything else but GLM-5 and Qwen3 are easily just as good as GPT-5.X (for coding).

mrweasel 2 days ago | parent | prev | next [-]

Oh, I read it as the number of subscribers would triple, but you're suggesting the price will?

That makes a little more sense, because the number of subscribers is so low that tripling it won't really make much difference in terms of turning a profit.

Bombthecat 2 days ago | parent [-]

It's for companies to replace people, and that works out OK for them. Even four times the price isn't that much.

ef3dfd 2 days ago | parent [-]

It's simply not going to happen. People like Nadella call it 'tacit knowledge': the reality is that the work people do is much broader than what is producible by LLMs alone. Without the human, there is no work done. Unlike classic machinery, LLMs are not comparable in that you can't simply reduce labour input by X and be fine. Sure, in the short term the consequences will not show up, but in the long term they will.

Altman and co. get down on their knees and pray that proposition is only transitory in the short run.

LLMs won't disappear, but they won't be large profit generators either, especially not while there is fierce competition and every dollar of profit is re-invested. The value of an asset is derived from its potential cash return, net of reinvestment, taxes et al.

Altman is hoping to survive long enough to finance R&D to figure out how to encode the entirety of what humans do, to be able to come good on the asinine aspirations he has put forth that justify its valuation. But it will end in disaster.

Bombthecat 2 days ago | parent [-]

Of course there will be humans, just way less of them.

Instead of ten, you just need two or three.

ef3dfd 2 days ago | parent [-]

You haven't put forward a compelling argument besides fluff.

This is so surface level and boring.

Most of you aren't really clued up on subject areas like Finance to talk about this stuff frankly. As long as a firm is beating its cost of capital, it will reinvest money to generate more growth. What does that mean? Oh. Hiring more people.

duped 2 days ago | parent | prev [-]

People working in AI companies are the last people I'd trust on price forecasting