belter 5 days ago

The real bottleneck isn’t Jevons paradox, it’s the Theory of Constraints. A human brain runs on under 20 W, yet every major LLM vendor is burning cash and running up against power supply limits.

If anything pops this bubble, it won’t be ethics panels or model tweaks but subscription prices finally reflecting those electricity bills.

At that point, companies might rediscover the ROI of good old meat based AI.

alwillis 5 days ago | parent | next [-]

> At that point, companies might rediscover the ROI of good old meat based AI.

That’s like saying when the price of gasoline gets too high, people will stop driving.

Once a lifestyle is built around driving (like commuting from the suburbs to a job in the city), giving it up is quite difficult, and in some cases impossible, without disrupting everything else.

A gallon of gas costs about 892% more in 2025 than it did in 1970 (not adjusted for inflation), and yet most people in the US still drive.

The benefits of LLMs are too numerous to put that genie back in the bottle.

We’re at the original Mac (128K of RAM, 9-inch B&W screen, no hard drive) stage of LLMs as a mainstream product.

belter 4 days ago | parent [-]

> when the price of gasoline gets too high

People get electric cars or take public transport...

Nemo_bis 4 days ago | parent [-]

Indeed

> Adjusting for long-term ridership trends on each system, seasonal effects, and inertia (the tendency for ridership totals to persist from one month to the next), CBO estimates that the same increase of 20 percent in gasoline prices that affects freeway traffic volume is associated with an increase of 1.9 percent in average system ridership. That result is moderately statistically significant: It can be asserted with 95 percent confidence that higher gasoline prices are associated with increased ridership.

https://www.cbo.gov/sites/default/files/110th-congress-2007-...

hkt 5 days ago | parent | prev | next [-]

I suspect for this reason we are going to see a lot of attempts at applied AI: I saw an article semi-recently about an AI weather forecasting model using considerably less power than its algorithmic predecessor, for instance. The answer is, as ever, to climb the value chain and make every penny (and joule) count.

TeMPOraL 5 days ago | parent | prev | next [-]

Where is this oft-repeated idea coming from? Inference isn't that expensive.

belter 5 days ago | parent [-]

My back-of-envelope estimate is that even a partly restricted plan would need to cost roughly $4,000–$4,500 per month just to break even.
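For what it's worth, a back-of-envelope estimate of that shape can be reproduced with a few assumed inputs. Every number below is an illustrative assumption (the comment doesn't state its inputs), not a vendor figure:

```python
# Hypothetical break-even sketch: all inputs are assumptions, not vendor data.
tokens_per_day = 5_000_000       # assumed heavy usage on a "partly restricted" plan
cost_per_million_tokens = 30.0   # assumed fully loaded inference cost, USD
days_per_month = 30

monthly_cost = (tokens_per_day / 1_000_000) * cost_per_million_tokens * days_per_month
print(f"${monthly_cost:,.0f}/month")  # $4,500/month with these inputs
```

Change any one of the assumed inputs and the break-even figure moves proportionally, which is why estimates like this span a range rather than a single number.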

dotancohen 5 days ago | parent | prev | next [-]

> good old meat based AI.

NI, or really just I.

Though some of us might fall into the NS category instead.

margalabargala 5 days ago | parent | prev | next [-]

Meat has far higher input requirements for good performance than raw energy alone.

belter 4 days ago | parent [-]

Hire Vegan Developers... :-)

ben_w 4 days ago | parent [-]

I'm not sure they meant that, but they might have.

An alternative reading is that we (i.e. "good old meat based AI") need more than just calories to make stuff.

margalabargala 4 days ago | parent [-]

> An alternative reading is that we (i.e. "good old meat based AI") need more than just calories to make stuff.

That was what I meant; we require also things like shelter, emotional wellbeing, and more, to operate at top performance levels.

ben_w 4 days ago | parent | prev [-]

> At that point, companies might rediscover the ROI of good old meat based AI.

I doubt this will look good for any party.

The global electricity supply is 375 W/capita, and there's a lot of direct evidence in the form of "building new power plants" that the companies are electricity-limited. I have long observed the trends of renewable energy, but even assuming their rapid exponential growth continues, they can only roughly double this by 2032.

If we simplify the debate about the quality of LLM output to "about as good as a junior graduate", then the electricity bill can rise until the cost curve of supplying that inference meets the cost curve of hiring a junior graduate. If electricity prices stay where they are, that equilibrium means graduates can't earn enough to feed themselves. If graduates earn the smallest amount needed to feed and house themselves in G7 nations, then normal people are priced out of heating/AC, and the street lights get turned off because municipalities can't cover the bill. If electricity for inference becomes as expensive as hiring Silicon Valley software engineering graduates, then normal people won't even be able to keep their phones charged.

That said:

> A human brain runs on under 20 W

Only if you ignore the body it's attached to, which we cannot currently live without. And we do also need a lot of time off, as we start working at 21 and stop at just under 70 (so 5/8ths of our lives), and the working week is 40 hours out of 168, and we need more time beyond that away from paid work for sickness and reproduction, and many of us also like holidays.

Between all the capacity factors, for every hour (@20W = 20 Wh) of the average American worker's brain being on a job, there's a corresponding average of about 1 kWh used by the bodies of various Americans.
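That figure can be roughly reconstructed from the capacity factors named above. The body power and population-per-worker inputs are assumptions I'm adding for illustration; the work-week and career fractions come from the comment itself:

```python
# Rough reconstruction of the ~1 kWh claim; body power and dependents
# per worker are assumed inputs, not figures from the comment.
body_w = 100.0                       # assumed average metabolic power of a human body
work_week_frac = 40 / 168            # working hours per week (from the comment)
career_frac = (70 - 21) / 78         # working years over an assumed 78-year life
on_job_frac = work_week_frac * career_frac   # ignores sickness and holidays

people_per_worker = 2.0              # assumed total population per active worker
wh_per_work_hour = body_w * people_per_worker / on_job_frac
print(round(wh_per_work_hour))       # on the order of 1,000 Wh = ~1 kWh
```

Folding in sickness, holidays, and reproduction time would push `on_job_frac` lower and the per-work-hour figure somewhat higher, so "about 1 kWh" is the right order of magnitude either way.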