indigodaddy 3 days ago

Funny how this guy thinks he knows exactly what's up with AI, and how "others" are "partly right and wrong." Takes a bit of hubris to be so confident. I certainly don't have the hubris to think I know exactly how it's all going to go down.

Razengan 2 days ago | parent | next [-]

How about a vague prediction that covers all scenarios? XD

*ahem* It's gonna be like every other tool/societal paradigm shift like the smartphone before this, and planes/trains/cars/ships/factories/electricity/oil/steam/iron/bronze etc. before that:

• It'll coalesce into the hands of a few corporations.

• Idiots in governments won't know what the fuck to do with it.

• Lazy/loud civvies will get lazier/louder through it.

• There'll be some pockets of individual creativity and freedom, like open source projects, that will take varying amounts of time to catch on in popularity or fade away to obscurity.

• One or two killer apps that seem obvious but nobody thought of, will come out of nowhere from some nobody.

• Some groups will be quietly working away using it to enable the next shift, whether they know it or not.

• Aliens will land turning everything upside down. (I didn't say when)

Razengan 2 days ago | parent [-]

Forgot:

• Militaries will want to kill everyone with it.

fragmede 3 days ago | parent | prev | next [-]

But do you have the audacity to be wrong?

indigodaddy 3 days ago | parent [-]

Yeah that's interesting, good perspective

ivape 3 days ago | parent | prev | next [-]

The problem is that the bubble people are so unimaginative, similar to Krugman, that anyone with an inkling of imagination can feel like a visionary compared to them. I know I'm describing Dunning-Kruger, but so be it: the bubble people are very, very wrong. It's like, man, they really are unable to imagine a very real future.

techblueberry 3 days ago | parent | next [-]

It’s a weird comparison, since the internet in the dial-up age was a bubble. Are you saying the hype machine for AI is in fact smaller than the internet's? Are you implying that AI will in fact grow that much more slowly and sustainably than the internet, despite trillions of investment?

Do you think Sam Altman, Jeff Bezos, and Mark Zuckerberg are all wrong saying that we’re in a bubble? Do they “lack imagination?”

Also? What do I need imagination for, isn’t that what AI does now?

timeinput 2 days ago | parent | next [-]

That’s a sharp and layered question, and I think you’re cutting right to the heart of the current tension around AI.

Libidinalecon a day ago | parent | prev [-]

Not only was it a bubble, but back then we actually had the internet. We didn't have to use our imaginations and build for technology yet to be invented, and we still got a massive, damaging bubble.

Even beyond that, the Soviet Union had just collapsed. The US had a balanced budget and 10X less debt. Globalization was just getting started. China was nothing compared to what it is today. It was the absolute most stable time of my life. Basically the opposite of 2025.

"This time things are different because AI blah blah blah".

When you boil it down, all bubbles are based around using your imagination to imagine how "this time it is different".

The people who don't think this is a bubble are not seeing that their imagination is a bug and not a feature here.

bccdee 3 days ago | parent | prev | next [-]

I find the argument for the bubble to be extremely straightforward.

Currently, investment into AI exceeds the dot-com bubble by a factor of 17. Even in the dot-com era, the early internet was already changing media and commerce in fundamental ways. November is the three-year anniversary of ChatGPT. How much economic value are they actually creating? How many people are purchasing AI-generated goods? How much are people paying for AI-provided services? The value created here would have to exceed what the internet was generating in 2000 by a factor of 17 (which seems excessive to me) to even reach parity with the dot-com bubble.

"But think where it'll be in 5 years"—sure, and let's extrapolate that based on where it is now compared to where it was 3 years ago. New models present diminishing returns. 3.5 was groundbreaking; 4 was a big step forward; 5 is incremental. I won't deny that LLMs are useful, and they are certainly much more productized now than they were 3 years ago. But the magnitude of our collective investment in AI requires that a huge watershed moment be just around the corner, and that makes no sense. The watershed moment was 3 years ago. The first LLMs created a huge amount of potential. Now we're realizing those gains, and we're seeing some real value, but things are also tapering off.

Surely we will have another big breakthrough some day—a further era of AI which brings us closer to something like AGI—but there's just no reason to assume AGI will crop up in 2027, and nothing less than that can produce the ROI that such enormous valuations will eventually, inexorably, demand.

lucaslazarus 3 days ago | parent | next [-]

I don’t get why people find it so hard to understand that a technology can be value-additive and still be in a position of massive overinvestment. Every generation of Californians seeks to relive the 1848 gold rush, spending millions excavating rivulets for mere ounces of (very real!) gold.

petesergeant 2 days ago | parent | next [-]

Exactly this. The future impact of AI and the financial credibility of OpenAI as a business are completely distinct.

m4rtink 2 days ago | parent | prev [-]

Not to mention the 1848 gold rush pretty much destroyed the existing society, culture, and businesses:

https://en.wikipedia.org/wiki/California_gold_rus

Not to mention thousands of native inhabitants getting killed or enslaved:

https://en.wikipedia.org/wiki/California_genocide

ivape 2 days ago | parent | prev | next [-]

What is AGI in your mind? Let's take someone who once upon a time was responsible for grading papers. As far as that person is concerned, AGI has arrived for their profession (it arrived nearly two years ago for them). You'll never be better than something that has read every book ever and can write better than you. AGI will come in tranches. Are you really going to hire that developer because you need extra manpower to stand up test coverage? No, so as far as that developer is concerned, AGI has arrived for that part of their professional career.

The bet is not that there will be one seminal moment of AGI where all the investment will make sense. The bet is that it has already shown up if you look for specific things, and will continue to do so. I wouldn't bet against the idea that LLMs will introduce themselves to all jobs, one at a time. Reddit moderators, for example, will meet AGI (as far as they know, their entire world being moderating) sooner than, say, I don't know, a radiologist.

The universe of people getting paid to make CRUD apps is over. Many here will be introduced to AGI faster and sooner. Then it could be phone customer support representatives. It could show up for the face-to-face worker who is now replaced by a screen that can talk to customers (which already arrived yesterday, it's here). It'll appear erratic and not cohesive, unless you zoom out and see the contagion.

---

Rome needed to recognize that the Barbarian hordes had arrived. Pay attention to all the places the invasion has landed. You can pretend like the Vandals are not in your town for a little bit, sure, but eventually they will be knocking on many doors (most likely all doors). We're in a time period of RADICAL transformation. There is no half-assing this conviction. Practicality will not serve us here.

visarga 2 days ago | parent | next [-]

> Are you really going to hire that developer because you need extra manpower to stand up test coverage? No, so as far as that developer is concerned, AGI has arrived for that part of their professional career.

That is exactly what you need in order to make AI useful. Even a baby needs to cry to signal its needs to parents, which are like ASI to it. AI working on a task lacks in 3 domains: start, middle and finish.

AI cannot create its own needs, they belong to the context where it is used. After we set AI to work, it cannot predict the outcomes of its actions unless they pass through your context and return as feedback. In the end, all benefits accumulate in the same context. Not to mention costs and risks - they belong to the context.

The AI is a generalist; context is exactly what it lacks. And context is distributed across people, teams, and companies. Context is non-fungible: you can't eat so that I get satiated. Context is what drives AI. And testing is the core contextual activity when using AI.

bccdee 2 days ago | parent | prev [-]

> Let's take someone who once upon a time was responsible for grading papers. As far as that person is concerned, AGI has arrived for their profession

You're talking about TAs. I know TAs. Their jobs have not disappeared. They are not using AI to grade papers.

> Are you really going to hire that developer because you need extra manpower to stand up test coverage?

Yes. Unsupervised AI agents cannot currently replace developers. "Oh we'll get someone to supervise it"—yes, that person's job title is "developer" and they will be doing largely the same job they'd have done 5 years ago.

> The universe of people getting paid to make CRUD apps is over.

Tell that to all the people who get paid to make CRUD apps. Frankly, Airtable has done more to disrupt CRUD apps than AI ever did.

> Rome needed to recognize that the Barbarian hordes had arrived.

IDK what to tell you. All these jobs are still around. You're just fantasizing.

duderific 2 days ago | parent [-]

Can you seriously deny that it will take many fewer developers using LLMs to do what was done without LLMs? Many if not most software companies have either greatly slowed hiring, or laid off tranches of developers.

Not all developer jobs will disappear, but there will most certainly be fewer available. Any new grad can tell you how hard it is to find a software engineering job nowadays.

Certainly we cannot just let an AI spin and build software unattended. But what used to take days can now be done in minutes.

askl 2 days ago | parent | prev | next [-]

> How many people are purchasing AI-generated goods?

Probably a lot. I remember my mom recently showing me an AI-generated book she bought. And pretty much immediately refunded it. Not because it was AI, but because the content was trash.

plastic3169 2 days ago | parent | prev | next [-]

> Even in the dot-com era, the early internet was already changing media and commerce in fundamental ways.

I agree that AI is overhyped but so was the early web. It was projected to do a lot of things ”soon”, but was not really doing that much 4 years in. I don’t think the newspapers or commerce were really worried about it. The transformation of the business landscape took hold after the crash.

tim333 3 days ago | parent | prev | next [-]

That "factor of 17" comes from an interest rate model that is unrelated to AI.

lucaslazarus 3 days ago | parent [-]

This is not true. Obviously the underlying effect is real but not nearly to this scale—for instance, neither the CPI nor the S&P500 are even remotely close to 17x higher than they were at the turn of the millennium.

tim333 2 days ago | parent [-]

The source is a report written by Julien Garran, based on the difference between actual interest rates and an idea of what they should be, called the Wicksell spread. There's a summary here https://www.marketwatch.com/story/the-ai-bubble-is-17-times-...

He figures there was a credit bubble like that around the time of the dot-com bubble and that there is one now, but the calculation is purely based on interest rates, and the money can go into any assets: property, stocks, crypto, etc. It's not AI-specific.

He explains it here https://youtu.be/uz2EqmqNNlE

The Wicksell spread seems to have come from Wicksell's proposed "natural rate of interest", detailed in his 1898 book:

https://en.wikipedia.org/wiki/Knut_Wicksell#Interest_and_Pri...

lucaslazarus 2 days ago | parent [-]

I see, thank you!

sumedh 2 days ago | parent | prev [-]

> The value created here would have to exceed what the internet was generating

It's precisely why these companies are investing so much: robots combined with AI will be creating that value.

bccdee 2 days ago | parent [-]

> Robots combined with AI will be creating that value.

Will they? Within what timeframe? Because a bubble economy can't be told to "just hang on a few more years" forever. LLMs are normal technology; they will not suddenly become something they are not. There's no indication that general intelligence is right on the horizon.

sumedh 2 days ago | parent [-]

If you think they won't, then you should short the stocks, starting with Nvidia, and get rich.

> There's no indication that general intelligence is right on the horizon.

You don't need general intelligence for every task. If a robot with limited intelligence can do some of those tasks more cheaply than a human, that is all corporations care about.

teaearlgraycold 3 days ago | parent | prev [-]

Almost no one I hear calling the AI hype machine a bubble is claiming AI is a short-term fluke. They're saying the marketing doesn't match the reality. The companies don't have the revenue they need. Model performance is hitting the top of the S curve. Essentially, this is the first big wave—but it'll be a while before the sea level rises permanently.

bdangubic 3 days ago | parent [-]

> marketing doesn't match the reality.

true for every marketing ever

an0malous 3 days ago | parent [-]

It’s not just a marketing stunt, it’s a trillion dollar grift that VCs are going to try to dump off onto the public markets when the reality doesn’t catch up to the hype fast enough

confirmmesenpai 3 days ago | parent | prev [-]

takes a lot of hubris to be sure it's a bubble too.

hitarpetar 3 days ago | parent [-]

that's why I always identify the central position of any argument and take it. that way no one can accuse me of hubris

yunnpp 3 days ago | parent | next [-]

Spoken like a wise man.

JohnnyMarcone 2 days ago | parent | prev [-]

You can take a position without being sure about it. e.g. "I'm at 70% that AI is a bubble."

hitarpetar 2 days ago | parent [-]

id probably go with 50% actually