vessenes 4 days ago

Almost every model trained by the majors has paid for itself with inference fees.

I’m not saying there isn’t a bubble, but I am saying if the researchers and strategists absolutely closest to the “metal” of realtime frontier models are correct that AGI is in reach, then this isn’t a bubble, it’s a highly rational race. One that large players seem to be winning right now.

jsheard 4 days ago | parent | next [-]

> Almost every model trained by the majors has paid for itself with inference fees.

Even if we assume this is true, the downstream customers paying for that inference also need it to pay for itself on average in order for the upstream model training to be sustainable, otherwise the demand for inference will dry up when the music stops. There won't always be a parade of over-funded AI startups burning $10 worth of tokens to bring in $1 of revenue.
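The sustainability point here is just unit economics. A minimal sketch (all figures are made up for illustration, not drawn from any real customer data):

```python
# Downstream sustainability check: upstream inference demand holds up
# only if the average customer's token spend is covered by the revenue
# that spend generates. All numbers are illustrative assumptions.
customers = [
    {"name": "subsidized startup", "token_spend": 10.0, "revenue": 1.0},
    {"name": "sustainable product", "token_spend": 1.0, "revenue": 4.0},
]

total_spend = sum(c["token_spend"] for c in customers)
total_revenue = sum(c["revenue"] for c in customers)

# If this is False across the whole customer base, inference demand is
# being propped up by investor money rather than end-user value.
sustainable = total_revenue >= total_spend
print(f"spend=${total_spend:.0f} revenue=${total_revenue:.0f} sustainable={sustainable}")
```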

Rover222 4 days ago | parent | next [-]

My employer spends $100k/month or more on OpenAI fees. Money well spent, in both product features and developer process. This is just one fairly small random startup. Thousands of companies are spending this money and making more money because of it.

Rebuff5007 4 days ago | parent [-]

Curious what makes you think the money is well spent.

I can maybe accept that it helped prototype and ship a bit more code in a shorter time frame... but does that bring in enough new customers, or make the product valuable enough, to justify $100k a month?!

Rover222 3 days ago | parent | next [-]

Probably 80% of that money goes towards product features that are crucial to retention and acquisition of customers, and the business is profitable. Could those features exist without AI integrations? Some yes, but the data would be limited/inferior, other features would not be possible at all.

The 20% spent on dev tooling seems well-spent. About 10 devs on the team, and all at least 2x (hard to measure exactly, but 2x seems conservative) more productive with these tools.
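The 2x claim can be turned into rough napkin math. A sketch, assuming a hypothetical $150k/year fully loaded cost per developer (my assumption, not a figure from the thread):

```python
# Napkin math for the dev-tooling ROI claim above. Assumptions
# (illustrative, not from the thread): 10 devs, $150k/year fully
# loaded cost per dev, 2x productivity multiplier.
monthly_ai_spend = 100_000
tooling_share = 0.20                        # 20% of spend goes to dev tools
tooling_cost = monthly_ai_spend * tooling_share

devs = 10
monthly_cost_per_dev = 150_000 / 12         # $12.5k/month fully loaded
productivity_multiplier = 2.0

# Value the extra output at what it would cost to hire it instead.
extra_output_value = devs * monthly_cost_per_dev * (productivity_multiplier - 1)

print(f"tooling cost: ${tooling_cost:,.0f}/month")
print(f"extra output: ${extra_output_value:,.0f}/month")
print(f"ratio:        {extra_output_value / tooling_cost:.2f}x")
```

Under these assumptions the tooling pays for itself several times over; the conclusion is only as good as the 2x estimate, which the commenter concedes is hard to measure.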

neutronicus 4 days ago | parent | prev [-]

Some of that $100k/month might be powering the features, rather than supporting development.

Rover222 3 days ago | parent [-]

yeah it's probably 80% going to product features (processing/classifying data, and agentic workflow features), and 20% to dev tools

onesociety2022 4 days ago | parent | prev | next [-]

Isn't most of OpenAI's revenue from end users rather than token sales? For Anthropic it's the opposite: almost all of their revenue comes from API usage. So even if AGI/ASI doesn't pan out, couldn't OpenAI still have a great consumer-focused inference business, building useful applications (and new devices) on existing state-of-the-art LLMs while winding down heavy investment in next-gen model training? Just replacing Google Search and smartphones with a new AI device would be a massive consumer business OpenAI could go after without any major advances in AI research.

vessenes 3 days ago | parent | prev | next [-]

I’m the other way — the cost of launching a creative / interesting software company / project just got cut to 1% or so. (I said launching. Maintaining … obviously not quite as good on the numbers).

I propose that software creation, and therefore demand for software creation, are subject to Jevons paradox.

ben_w 4 days ago | parent | prev [-]

Tokens that can be purchased for $10 may provide the purchaser with almost any dollar-denominated result, from negative billions* to positive billions**.

Right now, I assume more the former than the latter. But if you're an optimistic investor, I can see why one might think a few hundred billion dollars more might get us an AI that's close enough to the latter to be worth it.

Me, I'm mostly hoping that the bubble pops soon in a way I can catch up with what the existing models can already provide real help with (which is well short of an entire project, but still cool and significant).

* e.g. the tokens are bad financial advice that might as well be a repeat of SBF

** how many tokens would get you the next Minecraft?

sylario 4 days ago | parent | prev | next [-]

The thing is that AI researchers that are not focused on only LLM do not seem to think it is in reach.

sindriava 4 days ago | parent [-]

Demis Hassabis seems to think this, and not only does he not focus solely on LLMs, he got a Nobel Prize for a non-LLM system ;)

belter 4 days ago | parent [-]

As far as I know, that Nobel prize was for being the project manager...

vessenes 3 days ago | parent [-]

If you talk to any of his early investors, they considered him absolutely crucial to the project.

belter 2 days ago | parent [-]

They say the same about Sam Altman....

mossTechnician 4 days ago | parent | prev | next [-]

Which of these model-making companies have posted a profit? I'm not familiar with any.

vessenes 4 days ago | parent [-]

They account internally for each model separately; Dario said they even think of each model as a separate company on Dwarkesh some time ago.

Inference services are wildly profitable. Currently companies believe it’s economically sensible to plow that money into R&D / Investment in new models through training.

For reference, OpenAI’s monthly revenues are reportedly between $1b and $2b right now. Monthly. I think if you do a little napkin math you’ll see that they could be cashflow positive any time they wanted to be.
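The napkin math being gestured at, as a sketch. Only the $1-2b/month revenue range comes from the comment; the serving margin and training spend below are placeholder assumptions, not reported figures:

```python
# Sketch of the "cashflow positive any time they want" argument.
# monthly_revenue uses the midpoint of the $1-2b range quoted above;
# the margin and training capex are illustrative assumptions.
monthly_revenue = 1_500_000_000
inference_margin = 0.50                  # assumed gross margin on serving
monthly_training_capex = 1_000_000_000   # assumed next-gen training spend

gross_profit = monthly_revenue * inference_margin
cashflow_with_training = gross_profit - monthly_training_capex
cashflow_without_training = gross_profit

print(f"with training:    ${cashflow_with_training / 1e6:,.0f}M/month")
print(f"without training: ${cashflow_without_training / 1e6:,.0f}M/month")
```

The point of the argument is that the sign of the bottom line flips on a discretionary line item (training capex), not on the cost of serving customers.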

jenkinomics 4 days ago | parent | next [-]

Again with the "this is very profitable if you don't account for the cost of creating it" argument?

Then my selling 2 dollars for 1 dollar is a wildly profitable business as well! Can't sell them fast enough!

Why does it seem like so many people have ceased to think critically?

vessenes 3 days ago | parent | next [-]

You have LLM derangement syndrome, and don’t understand.

Say the first model cost $2 to make. On metered sales, they’ve made $10 on it.

They then decide to make a $20 model, raising more money. It turns out, that model made $100.

They then decide to make a $1,000 model. That model made $5,000.

There are two possible paths for their shiny new $10,000 model: either it will be a better market fit than the 1k model, or it will not.

If it is a better market fit than the 1k model, then it seems very likely that at some point it will make more than $10,000 (2x the prior model’s utility).

If it does not provide better value, then you can scrub that model and keep selling the $1k model. Eventually it will likely pay the additional investor capital back through profits.

What we have seen is the scenario above, with a couple of twists: first, training decisions (capital investment) overlap the useful life of the prior model, so you have to tease out per-model profitability when you think about strategy. Second, it turns out there’s quite a lot of money to be made distilling models the market likes into cheaper models with far better margins.

So, these businesses paying billions of dollars to train frontier models are absolutely rational actors. They are aggressive actors, engaged in an arms race, and not all of them will survive. But right now, with current inference demand, if all the global training capital dried up, (and therefore we are stuck with current models for some time), they would become highly, highly profitable companies during the period where fast followers tried to come in and compete on price.
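The per-generation economics described above can be summarized with the comment's own illustrative numbers (training cost paired with lifetime inference revenue):

```python
# Per-model-generation ROI, using the illustrative figures from the
# comment above (training cost -> lifetime metered revenue).
generations = [
    ("model A", 2, 10),
    ("model B", 20, 100),
    ("model C", 1_000, 5_000),
]

for name, cost, revenue in generations:
    print(f"{name}: spent ${cost:,}, earned ${revenue:,} -> {revenue / cost:.0f}x")

# Company-level accounting looks different: each generation's proceeds
# are immediately plowed into the next (much larger) training run, so
# the firm can show losses even while every model is individually
# profitable.
cumulative_spend = sum(cost for _, cost, _ in generations)
cumulative_revenue = sum(revenue for _, _, revenue in generations)
print(f"cumulative: spent ${cumulative_spend:,}, earned ${cumulative_revenue:,}")
```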

nouarngin a day ago | parent [-]

Is the profitable model that makes 5x its cost in revenue here in the room with us right now?

neutronicus 4 days ago | parent | prev [-]

OpenAI claims that each GPT generation has sold enough inference at high enough margin to recoup the cost of training it.

The company overall is still not profitable because these proceeds are being used to fund training the next GPT generation.

4 days ago | parent | prev [-]
[deleted]
ACCount37 4 days ago | parent | prev | next [-]

Ever since NLP and CSR, the two unassailable fortresses of every AI winter, fell to LLMs, I've had no doubt that AGI is within reach.

It's less "will it happen" now, and more "whether it hits in a few decades or in a few years".

mountainriver 4 days ago | parent | prev [-]

The idea that it’s a bubble on the frontier model side is insane. AI assisted coding alone makes it the most valuable thing we’ve ever created.

switchers 4 days ago | parent | next [-]

Get your head out of the proverbial. A bullshitting machine that lets some developers do things faster, if they modify how they develop, isn't even close to the most valuable thing we've ever created.

vessenes 3 days ago | parent | next [-]

I think you’re wrong. Consider the following. It’s 1995. You and your next door neighbour Jeff Bezos have both just raised $10mm from competing VCs to build amazon.com.

You can choose to have a Claude API portal to the future where you pay 2025 prices for token inference, or you can skip it, and use 1995 devs to build your competitor.

Which do you do?

mountainriver 4 days ago | parent | prev [-]

It easily is; nothing else is even remotely close. Software is the most valuable industry on earth, and we are well on our way to fully commoditizing it.

4 days ago | parent | prev | next [-]
[deleted]
vessenes 3 days ago | parent | prev [-]

Totally agreed.