simonw 2 hours ago

There exist a large number of people who are absolutely convinced that LLM providers are all running inference at a loss in order to capture the market and will drive the prices up sky high as soon as everyone is hooked.

I think this is often a mental excuse for continuing to avoid engaging with this tech, in the hope that it will all go away.

kingstnap 2 hours ago | parent | next [-]

I agree with you, but to be fair, the APIs really are expensive.

What people probably mistake for a loss leader is the generous usage limits on flat-rate subscriptions.

For example, GitHub Copilot Pro+ comes with 1,500 premium requests a month. That's quite a lot, and it's only $39.00 (a request is roughly one prompt).

For some time they were offering Opus 4.6 Fast at 9x billing (now raised to 30x).

That was up to ~167 requests, each with around 128k of context, for just $39. That ridiculously expensive model lists at $30/$150 per million input/output tokens, so you can easily imagine the economics of this (rough math below).
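
To make that concrete, here is a back-of-envelope sketch. The pricing, request count, and billing multiplier come from the numbers above; the assumption that every request sends the full 128k context and gets back about 4k output tokens is mine, purely for illustration.

    # Rough check on the Copilot Pro+ numbers above.
    # Assumed: every request sends the full 128k-token context and gets
    # back ~4k output tokens (my guess, for illustration only).
    INPUT_PRICE_PER_MTOK = 30.0     # USD per million input tokens (from the comment)
    OUTPUT_PRICE_PER_MTOK = 150.0   # USD per million output tokens (from the comment)
    INPUT_TOKENS = 128_000          # assumed: full context on every request
    OUTPUT_TOKENS = 4_000           # assumed: typical response length

    requests = 1500 // 9            # 1500 premium requests at 9x billing -> 166 Opus calls
    cost_per_request = (
        INPUT_TOKENS / 1_000_000 * INPUT_PRICE_PER_MTOK
        + OUTPUT_TOKENS / 1_000_000 * OUTPUT_PRICE_PER_MTOK
    )
    print(f"{requests} calls x ${cost_per_request:.2f} = ${requests * cost_per_request:.0f}")
    # -> 166 calls x $4.44 = $737 at API list price, versus the $39 subscription.

Even if the average request were a tenth of that assumed size, the API list price would still come out to roughly double the subscription fee.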

louiereederson 2 hours ago | parent | prev [-]

Referring to my earlier comment, you need to have a model for how to account for training costs. If Anthropic stops training models now, what happens to their revenues and margins in 12 months?

There's a difference between running inference and running a frontier model company.

simonw 2 hours ago | parent [-]

Training costs are fixed. You spend $X-bn training a model and that single model then benefits all of your customers.

Inference costs grow with your users.

Provided you are making a profit on that inference, you can eventually cover your training costs if you sign up enough paying customers.

If you LOSE money on inference, every new customer makes your financial position worse.
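
A toy model of that argument, with every figure invented purely for illustration (none of these are any provider's actual numbers):

    # Training is a one-off fixed cost; inference scales with users.
    # All figures below are made up to illustrate the point.
    TRAINING_COST = 1_000_000_000   # hypothetical one-off training spend: $1bn
    REVENUE_PER_USER = 240          # hypothetical $20/month subscription, per year
    INFERENCE_COST = 180            # hypothetical annual serving cost per user

    margin = REVENUE_PER_USER - INFERENCE_COST        # +$60 per user per year
    print(f"break even at ~{TRAINING_COST / margin:,.0f} paying users")
    # -> ~16,666,667 users; past that point, every new user adds profit.

    # Flip the sign: at $300/user/year to serve, the margin is -$60 and
    # every additional user digs the hole deeper.
    bad_margin = REVENUE_PER_USER - 300
    print(f"10M users lose ${-(10_000_000 * bad_margin) / 1e6:.0f}M per year, "
          f"before the training bill is even touched")
    # -> $600M per year of inference losses on top of the fixed $1bn.

With a positive per-user margin, scale is the cure; with a negative one, scale is the disease.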