aurareturn a day ago

I don't have any evidence. You'll have to take what the Anthropic and OpenAI CEOs say publicly at face value.

However, it seems to make a lot of sense. Anthropic literally added $6b ARR in February 2026 alone. I doubt training costs go up that fast.

ainch 19 hours ago | parent | next [-]

It's definitely true that they've increased their revenue rapidly. But at the same time, the 'scaling laws' the labs were first built around require exponentially scaling costs (roughly 10x the FLOPs for a fixed fractional reduction in training loss).
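The scaling-law point can be made concrete with a toy power law. The constants below are made up purely for illustration (not fitted to any real model); the shape of the curve is what matters:

```python
# Illustrative only: a Chinchilla-style power law, loss(C) = a * C**(-b),
# with hypothetical constants a and b. Under such a law, each fixed
# *fractional* drop in loss costs a constant compute multiplier, so cost
# grows geometrically as loss improvements accumulate.
a, b = 10.0, 0.05  # made-up fit parameters for the sketch

def loss(compute):
    """Training loss as a function of total compute, per the toy law."""
    return a * compute ** -b

def compute_for_loss(target):
    """Invert loss(C) = a * C**-b  =>  C = (a / target) ** (1 / b)."""
    return (a / target) ** (1 / b)

# Compute needed to hit loss 2.0, then loss 1.9 (a ~5% fractional drop):
c1 = compute_for_loss(2.0)
c2 = compute_for_loss(1.9)
print(c2 / c1)  # constant multiplier for every further 5% drop in loss
```

The ratio `c2 / c1` comes out to roughly 2.8x under these constants: each modest improvement in loss multiplies the compute bill, which is the sense in which revenue has to keep outrunning an exponentially growing training cost.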

If anything, getting a better look at the economics is a reason to look forward to one of them IPO-ing. I suspect the labs probably could cut R&D and turn a profit, but that might only work for one generation, until they get superseded by the competition.

aurareturn 12 hours ago | parent [-]

There is no doubt that competition is what is driving unprofitability. So when people say AI can't be monetized, I laugh. Right now, foundational AI is unprofitable because of competition, not because they can't make money.

arctic-true a day ago | parent | prev [-]

But this is exactly the problem - we have to take it on faith that inference is profitable because nobody actually knows. It’s hard to even define what that would mean, and while I am suspicious of claims that frontier lab CEOs are just out-and-out liars or bad people, defining and calculating the real cost of inference would be time- and labor-intensive in its own right and there is no strong incentive to do it other than “tech reporters are curious.” Until the IPO, we just won’t know.

aurareturn a day ago | parent [-]

A lot of people know. A lot of insiders have been saying tokens are profitable. Is the conspiracy theory that everyone is lying? Including the OpenAI and Anthropic CEOs, employees, Cursor management, and the inference providers serving Chinese models?

arctic-true a day ago | parent [-]

Profitable on what basis? They generate more revenue than the cost of electricity? Does that factor in the cost to service the massive, multi-layer cake of debt that was necessary to even begin to serve inference in the first place - not from a training perspective but from a hardware and facilities perspective?

aurareturn a day ago | parent [-]

Profitable as in they make some money on every token they generate.

And as already mentioned, the path to profitability is inference revenue eclipsing training costs. That's already happening rapidly.

arctic-true a day ago | parent [-]

I’m not talking about training costs. I’m talking about startup costs. You have to pay for GPUs (or to rent data centers). You have to pay for the electricity that runs those data centers, and in a lot of cases these frontier labs are building the data centers on credit, so you need to pay for the construction, the materials, etc. If it was as simple as “running the GPUs costs less than we charge for it,” I might be inclined to agree. But the GPUs don’t just appear by magic.

aurareturn a day ago | parent [-]

Right now, demand for GPUs far exceeds supply. Every cloud company is saying they're leaving money on the table because they don't have enough compute to serve the demand.

It seems like you're arguing that the bubble is going to collapse soon, like the author? How can it collapse when the demand is so much bigger than supply? Do you think the demand is fake? Or that AI will stop making progress from here on out?

arctic-true a day ago | parent [-]

The demand is real. The tech is real. The economics are completely unsustainable. Switching costs and barriers to entry are too low, operating costs are too high. And if the tech improves, it actually makes it even easier for competitors to swoop in and take market share. Not long ago, an agent that was 80% as good as SOTA was not usable. A year from now, an agent that is 80% as good as SOTA will be better than the best agent is today. We have it on good authority that today’s agents are very good, very useful. Why bother paying full price?

This is deeply ironic in a way. Because the whole premise of AI labor replacement is that AI does not need to be better than human labor, it just needs to be cheaper with acceptable performance. But the same is true one step down: discount AI doesn’t need to be better than bleeding-edge AI, it just needs to be cheaper with acceptable performance.