og_kalu 18 hours ago

They have literally hundreds of millions of users that are completely free. Not Google-search or Facebook free, but free free, and they only suffer a few billion in losses. Inference is cheap and their unit economics are fine. There is literally no business that would be making a profit under those constraints. If they need to make a profit, they can implement ads and that will be that.

mgh95 18 hours ago | parent [-]

In 2024 (when the customer mix was more favorable) they lost $5B on $10B in forward-looking ARR.

They aren't pulling an Amazon and balancing cash flow with costs. They're just incinerating money for a low-value userbase. Even at FB ARPU the economics are still very poor.

og_kalu 18 hours ago | parent [-]

>In 2024 (when customer mix was more favorable)

Okay, so still hundreds of millions of users

>They aren't pulling an Amazon and balancing cash flow with costs.

Nobody said they were. I said having hundreds of millions of completely free users would suck the profitability of any business, and that the remedy would be simple, should the need for it arise.

>They're just incinerating money for a low value userbase.

If you don't see how implementing ads in a system designed for natural conversations, with users whose most common queries are for “Practical Guidance” and “Seeking Information”, could be incredibly valuable, then you have no foresight and I don't know what to tell you.

>Even at FB arpu the economics are still very poor.

No they aren't and I honestly have no idea what you're talking about. Inference is cheap and has been for some time.

mgh95 17 hours ago | parent [-]

I don't think you realize the issue. They aren't monetizing their SaaS product satisfactorily -- hence the Amazon cash flow imbalance statement. This indicates they must find new markets to survive. Despite this, however, they are gaining only in poorer markets, limiting the monetizability of a high cost product.

Implementing ads is a hail-mary. It puts them in a knife fight with Google, which will likely turn into a race to the bottom that OpenAI cannot sustain and win.

FB global ARPU is about 50 USD. At 700M customers, they do 35B in revenue annually. This compares to a publicly stated expected cost of approximately 150B in computing alone over the next 5 years (see: https://fortune.com/2025/09/06/openai-spending-outlook-115-b...). This leaves a profit of 5B per year, against 90B in expected R&D costs. Even if OpenAI develops a product and fires all employees, you are looking at a payback period of about 18 years.
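That arithmetic can be sanity-checked in a few lines (a sketch using the figures quoted in the comment: $50 ARPU, 700M users, $150B of compute over 5 years, $90B of R&D; these are the comment's inputs, not audited figures):

```python
# Back-of-envelope check of the ARPU math above (all inputs assumed).
arpu_usd = 50                  # FB-level global ARPU, per the comment
users = 700e6                  # stated customer count
annual_revenue = arpu_usd * users         # $35B/year
annual_compute = 150e9 / 5                # $150B over 5 years -> $30B/year
annual_margin = annual_revenue - annual_compute   # $5B/year before R&D
rd_costs = 90e9                # expected R&D, per the comment
payback_years = rd_costs / annual_margin  # years to recoup R&D alone
print(annual_margin, payback_years)       # 5000000000.0 18.0
```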

Fundamentally, OpenAI does not have the unit economics of a traditional SaaS. "Hundreds of millions of users" is hundreds of millions of people consuming expenses and not generating sufficient revenue to justify the line of business as a going concern. This, coupled with declining enterprise AI adoption (https://www.apolloacademy.com/ai-adoption-rate-trending-down...) paints an ugly picture.

ares623 12 hours ago | parent | next [-]

Facebook users spend multiple hours per day doomscrolling, and the operational cost of a doomscrolling user is minimal: most of it is served from a CDN.

Imagine 700M users “doomchatting” with GPT-5 for several hours per day to justify the ROI of advertising.

haijo2 17 hours ago | parent | prev | next [-]

Very nice post. As a corporate finance and valuation enthusiast, I approve.

og_kalu 17 hours ago | parent | prev [-]

>Despite this, however, they are gaining only in poorer markets

They are gaining everywhere. Some more than others, but to say they are only gaining in poorer markets is blatantly untrue.

>FB global ARPU is about 50 USD. At 700M customers, they do 35B in revenue annually.

Yeah, and that would make them healthily profitable.

>This compares to a publicly stated expected cost of approximately 150B in computing alone over the next 5 years

Yes, because they expect to serve hundreds of millions to potentially billions more users. 'This leaves a profit of 5B per year' makes some very bizarre assumptions. You're conflating a future-scale spending projection with today's economics. That number is a forward-looking projection tied to massive scale; it doesn't prove current users alone justify that spend, and they clearly don't. There is no reality where they spend that much if their userbase stalls at today's numbers, so the point is moot and '5B per year' is a made-up number.

>Fundamentally, OpenAI does not have the unit economics of a traditional SaaS.

Again, everything points to their unit economics being perfectly fine.

menaerus 8 hours ago | parent | next [-]

There's one thing you're missing: inference is not cheap. Hardware is not cheap. Electricity is not cheap. And that's without R&D. They show that, on average, they recorded ~2.627B requests/day. That's ~79B requests/month, or ~959B requests/year. And this is only the consumer ChatGPT data; Enterprise isn't included AFAICT. Each request translates to a direct cost that can be at least roughly estimated.
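Scaling the stated ~2.627B requests/day is straightforward (a sketch; the per-request cost at the end is a purely illustrative placeholder, not a known figure):

```python
requests_per_day = 2.627e9           # figure stated above
per_month = requests_per_day * 30    # ~7.88e10, i.e. ~79B/month
per_year = requests_per_day * 365    # ~9.59e11, i.e. ~959B/year
# Multiplying by a hypothetical per-request cost turns volume into
# dollars; $0.001/request here is an assumption for illustration only:
annual_cost = per_year * 0.001       # ~$0.96B/year under that assumption
print(per_month, per_year, annual_cost)
```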

og_kalu 6 hours ago | parent [-]

No, inference is pretty cheap, and a lot of things point to that being true.

- Prices for API access to open models from third-party providers, who have no motive to subsidize inference

- Google says their median query costs about as much as a Google search

Thing is, what you're saying would have been true a few years ago. This would all have been intractable. But LLM inference costs have quite literally been slashed by several orders of magnitude in the last couple of years.

menaerus 5 hours ago | parent [-]

You would probably understand if you knew how LLMs are run in the first place but, as ignorant as you are (sorry), I have no interest in debating this with you anymore. I tried to give a tractable clue which you unfortunately chose to counter-argue with non-facts.

og_kalu 5 hours ago | parent [-]

Touting requests per day is pretty meaningless without per-query numbers, but sure, I'm the one who doesn't understand. What providers with no incentive to subsidize are charging is about as factual as it gets, but sure, lol.

I've replied to you once man. Feel free to disengage but let's not act like this has been some ongoing debate. No need to make up stories.

menaerus 4 hours ago | parent [-]

Which is why I said it can be roughly estimated. And it can be roughly estimated even without those numbers, by assuming a fleet of some size X and the number of hours that fleet is utilized per day, over the whole year. Either way, you will end up with a hefty number. Do the math and you'll see that inference is far from cheap.
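A fleet-based estimate of the kind described might look like this (every input is a hypothetical placeholder chosen for illustration, not a known OpenAI number):

```python
# Hypothetical fleet-cost sketch: fleet size, utilization, and the
# blended $/GPU-hour rate are all assumed values.
gpus = 300_000               # assumed fleet size "X"
hours_per_year = 24 * 365    # assume near-full utilization
cost_per_gpu_hour = 2.00     # assumed blended hardware + power cost
annual_cost = gpus * hours_per_year * cost_per_gpu_hour
print(f"${annual_cost / 1e9:.2f}B/year")  # -> $5.26B/year
```

Even with modest assumptions the total lands in the billions per year, which is the "hefty number" point; whether that is cheap *per user* is the other side of the argument.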

og_kalu 4 hours ago | parent [-]

Of course the total across all requests is a hefty number; they're serving nearly a billion weekly active users. What else would you expect? Google search, Facebook - those would all be hefty numbers too. The point is that inference is pretty cheap per user, so when they get around to implementing ads, they'll be profitable.

Again, there are many indicators that inference per user is cheap. Even the sheer fact that OpenAI closed 2024 serving hundreds of millions of users and lost 'only' 5B is a pretty big clue that inference is not that expensive.

menaerus 8 minutes ago | parent [-]

Nonsense but I digress.

mgh95 15 hours ago | parent | prev [-]

No, the economics are horrible. At current 30Y T-bond rates, your money doubles every ~15 years. Your money grows faster in USD treasuries than in OpenAI. That's disastrous.
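The doubling claim follows from compound growth (a sketch assuming a ~4.8% 30-year Treasury yield; the actual rate fluctuates):

```python
import math

rate = 0.048  # assumed 30Y T-bond yield; the exact figure varies
doubling_years = math.log(2) / math.log(1 + rate)
print(round(doubling_years, 1))  # ~14.8 years; rule of 72: 72 / 4.8 = 15
```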

og_kalu 6 hours ago | parent | next [-]

You're moving the goalposts now. Stop making up numbers and presenting them as fact. What you think OpenAI's returns will be is your opinion, not a justification for claiming 'poor unit economics'.

simianwords 8 hours ago | parent | prev [-]

Why do you think the economics are horrible?

mangamadaiyan 7 hours ago | parent [-]

They've explained why. Now why do you think the economics are not horrible?

simianwords 6 hours ago | parent [-]

I have the same question as my sibling comment. Where did they get the numbers?