0xbadcafebee 9 hours ago

Fwiw, railroads were the reason for some of the biggest bank collapses in history. The Panic of 1873 was literally called "the Great Depression" (until a greater depression hit); twenty years later came the Panic of 1893. Both were due to over-investment and a bursting bubble, and they took out tons of banks and businesses.

We're seeing exactly the same thing with AI: massive investment is creating a bubble without a payoff. We know the value will fall over time, because software and hardware both get more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize that and the bubble bursts.

And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ.) And since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.

wr2 2 hours ago | parent | next [-]

Lol, a bit dramatic at the end. There will be a correction in stocks that were priced for AI-related growth.

But what I see as the two big costs for America:

1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets

2) The large tech firms who participated in large AI-related capex spend won't be trusted with their cash balances - i.e., they'll have to return more cash, leaving less money for reinvestment

All the hype and fanfare that draws in investment comes with a cost - you gotta deliver. People have an asymmetric relationship between gains and losses.

keeda 7 hours ago | parent | prev [-]

> We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff.

...

> And so far there's no evidence that all this investment has generated more profit for the users of AI.

If you look around a bit, you will find evidence for both. Recent data finds pretty high rates of success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive ROI.)

The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime on HN or in the media echo chamber.

It's preliminary evidence, but given that this weird, entirely unprecedented technology is only about 3 years old and people are still figuring it out (something the report itself calls out), this is significant.

0xbadcafebee 4 hours ago | parent | next [-]

75% report positive ROI (and the VPs are much more "optimistic" than the middle managers who are closer to the work) - but how much ROI? 1%? The fact that they don't quote a figure at all is pretty telling. And that's the ROI of the people buying the AI services, which are often heavily subsidized. If it costs a billion dollars to give a mid-sized company a 1% ROI, that doesn't sound sustainable.
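To make that concrete - with entirely made-up numbers, since the report quotes no figures - here's the basic arithmetic: a "positive ROI" can be vanishingly small, and the customer's ROI says nothing about what a subsidizing provider spent to deliver it.

```python
# Basic ROI arithmetic with hypothetical numbers (the report quotes none).
def roi(gain: float, cost: float) -> float:
    """Return on investment: net gain as a fraction of cost."""
    return (gain - cost) / cost

# A mid-sized company spends $10M on AI services and attributes $10.1M of
# value to them: ROI is "positive", but only 1%.
print(f"{roi(gain=10_100_000, cost=10_000_000):.1%}")  # 1.0%

# Meanwhile, if a heavily subsidized provider spent $1B to deliver the
# services behind that $10M in revenue, its own ROI is deeply negative.
print(f"{roi(gain=10_000_000, cost=1_000_000_000):.1%}")  # -99.0%
```

The point isn't the specific numbers (which are invented), just that "75% report positive ROI" is compatible with both a boom and an unsustainable subsidy.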

I would love to see another report that isn't a year old with actual ROI figures...

chatmasta 2 hours ago | parent [-]

It’s not easy to quantify because you’re basically substituting for or augmenting labor. How do you quantify ROI on employees? You can look at the profit of a project they’re hired to execute. But with AI, its contribution is mixed in with the employees’, so how do you separate the ROI of the two? With time we might be able to make comparisons, but outside of very specific scenarios it’s difficult to quantify.

SlinkyOnStairs 4 hours ago | parent | prev [-]

> The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime on HN or in the media echo chamber.

It honestly just isn't that interesting. It's most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as showing "ROI" rather than "ROI measurement".

In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".

This doesn't mean much. It's not even dubiously-measured ROI data; it's not ROI data at all, just what leadership thinks is true.

And that's a worrying thing to rely on, since it's well documented (and measured by the report's next question) that there's a significant discrepancy between how high-level leadership and low-level leadership/ICs rate AI "ROI".

One of the main explanations for that discrepancy is Goodhart's law. Many companies are now simply demanding AI productivity as a "target", with accusations of "worker sabotage" thrown around readily. That makes good economy-wide data on AI ROI very hard to get.