abathologist 7 days ago

One clever ingredient in OpenAI's secret sauce is billions of dollars of losses. About $5 billion dollars lost in 2024. https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-t...

throwmeaway222 7 days ago | parent | next [-]

That's all different now with agentic workflows, which weren't really a big thing until the end of 2024. Before, they were doing one request; now they're doing hundreds for a given task. The reason OpenAI/Azure win over locally run models is the parallelization you can do with a thinking agent: simultaneous processing of multiple steps.

DoctorOetker 6 days ago | parent | prev | next [-]

Due to batching, inference is profitable, very profitable.

Yet undoubtedly they are making what is declared a loss.

But is it really a loss?

If you buy an asset, is that automatically a loss? Or is it an investment?

By "running at a loss" one can build a huge dataset, to stay in the running.
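The batching claim above can be sketched with a toy model. The cost number below is a made-up placeholder, not anyone's actual serving cost; the point is only that a memory-bandwidth-bound forward pass costs roughly the same whether it decodes one sequence or many, so the per-request cost is amortized across the batch:

```python
# Toy sketch of why batching makes per-request inference cheap.
# STEP_COST is an illustrative assumption, not a real figure.
STEP_COST = 0.01  # assumed cost of one batched forward pass

def cost_per_request(batch_size: int) -> float:
    """Model weights are read from memory once per forward pass and
    reused for every sequence in the batch, so the fixed step cost
    is split evenly across the batched requests."""
    return STEP_COST / batch_size

print(cost_per_request(1))   # unbatched: full step cost per request
print(cost_per_request(64))  # batched: the same step cost split 64 ways
```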

dbbk 4 days ago | parent [-]

How batched can it really be though if every request is personalised to the user with Memory?

DoctorOetker 4 days ago | parent [-]

Imagine pipelining lots of infra-scale GPUs. Naive inference would need all previous tokens to be shifted "left", from the append-head to the end-of-memory "tail", which would require a huge amount of data flow for the whole KV cache etc. Instead of calling GPU 1 the end-of-memory and GPU N the append-head, you keep the data static and let the roles rotate like a circular buffer. So for each new token inference round, the previous round's end-of-memory GPU becomes the new append-head GPU. The highest bandwidth is keeping data static.
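The role rotation can be sketched in a few lines. This is a hypothetical illustration of the idea only (the GPU count and role names are made up, and real pipeline schedulers are far more involved): data stays put on each GPU, and only the labels rotate:

```python
from collections import deque

# Hypothetical sketch: N pipeline stages hold their KV-cache
# segments in place; only the *roles* rotate each token round.
N_GPUS = 4

# roles[0] is the current append-head; roles[-1] is the
# end-of-memory tail. The ints are GPU ids; the data on each
# GPU never moves.
roles = deque(range(N_GPUS))

def next_token_round(roles: deque) -> int:
    """Instead of shifting every KV-cache entry one GPU 'left'
    (huge data movement), rotate which GPU plays which role:
    last round's end-of-memory GPU becomes the new append-head."""
    roles.rotate(1)  # the tail GPU moves to the front of the role list
    return roles[0]  # id of the GPU that now appends the new token

for _ in range(3):
    print("append-head is GPU", next_token_round(roles))
```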

nickpsecurity 7 days ago | parent | prev | next [-]

You hit the nail on the head. Just gotta add the up to $10 billion investment from Microsoft to cover pretraining, R&D, and inference. Then, they still lost billions.

One can serve a lot of models if allowed to burn through over a billion dollars with no profit requirement. Classic VC-style, growth-focused capitalism with an unusual business structure.

gregoriol 7 days ago | parent | prev | next [-]

With infinite resources, you can serve infinite users. Until it's gone.

93po 7 days ago | parent | prev [-]

They would be break-even if all they did was serve existing models and got rid of everything related to R&D.

mperham 7 days ago | parent | next [-]

Have they considered replacing their engineers with AI?

Invictus0 7 days ago | parent | prev | next [-]

An AI lab with no R&D. Truly a hacker news moment

nl 7 days ago | parent | next [-]

The unspoken context there is that the inference isn't the thing causing the losses.

gitremote 7 days ago | parent [-]

Inference contributes to their losses. In January 2025, Altman admitted they are losing money on Pro subscriptions, because people are using it more than they expected (sending more inference requests per month than would be offset by the monthly revenue).

https://xcancel.com/sama/status/1876104315296968813

aurareturn 7 days ago | parent [-]

So people find more value than expected, so they'll just raise the price. Meanwhile, they still make more money per inference request than it costs to serve.

oblio 7 days ago | parent | next [-]

This assumes that the value obtained by customers is high enough to cover any possible actual cost.

Many current AI uses are low value things or one time things (for example CV generation, which is killing online hiring).

aurareturn 7 days ago | parent [-]

  Many current AI uses are low value things or one time things (for example CV generation, which is killing online hiring).

We are talking about Pro subs who have high usage.

oblio 7 days ago | parent [-]

True.

At the end of the day, until at least one of the big providers gives us balance sheet numbers, we don't know where they stand. My current bet is that they're losing money whichever way you dice it.

The hope being as usual that costs go down and the market share gained makes up for it. At which point I wouldn't be shocked by pro licenses running into the several hundred bucks per month.

gitremote 7 days ago | parent | prev [-]

Currently, they lose more money on inference than they make from Pro subscriptions, because they are essentially renting out the service for a flat monthly fee instead of charging for usage (per token).

aurareturn 7 days ago | parent [-]

Do you have a source for that?

gitremote 6 days ago | parent [-]

When an end user asks ChatGPT a question, the chatbot application sends the system prompt, user prompt, and context as input tokens to an inference API, and the LLM generates output tokens for the inference API response.

GPT API inference cost (for developers) is per token (sum of input tokens, cached input tokens, and output tokens per 1M used).

https://openai.com/api/pricing/

https://azure.microsoft.com/en-us/pricing/details/cognitive-...

(Inference cost is charged per token even for free models like Meta LLaMa and DeepSeek-R1 on Amazon Bedrock. https://aws.amazon.com/bedrock/pricing/ )

ChatGPT Pro subscription pricing (the chatbot for end users) is $200/month

https://openai.com/chatgpt/pricing/

"insane thing: we are currently losing money on openai pro subscriptions!

people use it much more than we expected."

- Sam Altman, January 6, 2025

https://xcancel.com/sama/status/1876104315296968813

Again, this means that the average ChatGPT Pro end user's chattiness costs OpenAI more in inference (too many input and output tokens sent and received per month) than the $200/month in revenue OpenAI receives from the average Pro user.

The analogy is Netflix losing money on subscriptions because users watch too much streaming. Banning account sharing causes many users to cancel, yet it actually helps profitability, because those extra users cost more to serve than the revenue they brought in.

hn92726819 7 days ago | parent | prev [-]

I think you maybe have misunderstood the parent (or maybe I did?). They're saying you can't compare an individual's cost to run a model against OpenAI's cost to run it + R&D. Individuals aren't paying for R&D, and that's where most of the cost is.

TheAlchemist 7 days ago | parent | prev | next [-]

Would you have any numbers to back it up ?

knowitnone2 7 days ago | parent | prev [-]

They are not the only player, so getting rid of R&D would be suicide.

Lionga 7 days ago | parent [-]

It is now 3 years since I was told AI would replace engineers in 6 months. How come none of the AI companies have replaced their engineers?