Avicebron 9 hours ago

I wonder if "AGI" is going to end up like quantum computing: with expectations and predictions so unmoored from reality that everyone just sort of pretends it's a thing without ever actually, genuinely thinking about what's going on.

Edit: words

jordanb 9 hours ago | parent | next [-]

The history of AI since the 1960s is one of slow, incremental improvement: the public loses interest for a decade or so, then notices the previous decade of progress when someone releases a glitzy demo. That's followed by an investment frenzy, with a bunch of hucksters promising that HAL 9000 is two years away, and then the zeitgeist forgets about it for another decade-ish.

This has happened at least five times so far.

cogman10 9 hours ago | parent [-]

I'd say we are getting pretty close to the "now or never" point of AGI.

We are pretty close to the limits of transistor fabrication. Barring radically different manufacturing and/or ASIC development, the performance we have today will be roughly the performance available in 10 years (my prediction: maybe a 2x improvement in compute over that decade).

If you've been paying attention, you've already seen compute development slow down. A 3060 GPU isn't really significantly slower than a 5060, even though it's five years old now.

wood_spirit 9 hours ago | parent [-]

A human neuron is a thousand times bigger than a transistor.

Aren't there directions hardware and algorithms have been heading - parallel processing - that aren't limited by fabrication?

cogman10 8 hours ago | parent | next [-]

> A human neuron is a thousand times bigger than a transistor.

Correct: it works on principles currently completely unapplied in ASIC design. We don't, for example, have many mechanisms that allow new pathways to form in hardware - at least, not outside of a highly controlled fashion. It's not clear that it would even be helpful if we did.

> Aren't there directions hardware and algorithms have been heading - parallel processing - that aren't limited by fabrication?

They are limited by the power budget. Yes, we can increase the amount of parallel compute 100x, but not without also increasing the power budget by 100x.

Beyond that, not all problems can be made parallel: data dependencies exist, and they always slow things down. And coordination isn't free for parallel algorithms either.
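
To put rough numbers on that point, here's a minimal Amdahl's-law sketch (not from this thread; the 5% serial fraction and the amdahl_speedup helper below are illustrative assumptions, not measurements):

    # Amdahl's law: speedup is capped by the serial fraction of the work.
    def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
        """Ideal speedup when only the parallel portion scales with workers."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

    for n in (1, 10, 100, 1000):
        print(n, round(amdahl_speedup(0.05, n), 1))
    # 1 -> 1.0, 10 -> 6.9, 100 -> 16.8, 1000 -> 19.6
    # Even with only 5% serial work, 100x the hardware (and roughly 100x the
    # power) buys about 17x, and the curve flattens out near 20x.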

I'm not saying there isn't some new, unexplored way to do computation. I'm saying we've traveled a multi-decade path to today's compute capabilities, and we may be at the end of that road. Building a new model of computation that's ultimately adopted will (likely) take decades more. I mean, consider how hard it's been to purge x86 from society - and we're looking at a problem a million times more difficult than just getting rid of x86.

oidar 9 hours ago | parent | prev [-]

Transistors will never reach the efficiency of a neuron. A transistor is too limited in its connections.

antegamisou 8 hours ago | parent | prev [-]

This is easily the case for most laypeople, in my experience at least. Plenty of people are fairly taken aback by GenAI's capabilities, and some of them have genuinely expressed concern that human intelligence could go extinct very soon.