jodrellblank 6 hours ago

LLMs aren't AGI and maybe aren't a path to AGI, but step back and look at the way the world is changing. Hard disks were invented at IBM in the early 1950s and first shipped in 1956; less than a hundred years later, an estimated million terabytes of hard disk capacity is made and sold every year, and the cumulative total sold has climbed through Mega, Giga, Tera, Peta, Exa... to 1.36 Zettabytes.
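Those prefixes are easy to skim past, so here's a quick back-of-the-envelope in Python, treating the figures quoted above as decimal SI units (the numbers themselves are this comment's estimates, not measured data):

```python
# Decimal SI storage units (the prefix ladder above).
TB = 10**12   # terabyte
EB = 10**18   # exabyte
ZB = 10**21   # zettabyte

annual_bytes = 1_000_000 * TB   # "a million terabytes a year" = 1 EB/year
total_bytes = 1.36 * ZB         # "1.36 Zettabytes" sold in total

print(annual_bytes / EB)   # one exabyte per year
print(total_bytes / TB)    # ~1.36 billion terabytes, cumulative
```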

In 2000, webcams were barely a thing and audio was often recorded to dictaphone tapes; now you can find a recorded photo or video of a startling fraction of everyone and everything on Earth. Maybe a tenth of all humans; almost any place, animal, insect, or natural event; almost any machine, mechanism, invention, or painting; a large sampling of "indoors", both public and private; almost any festival, event, or tradition; and a very large sampling of people doing things, and people teaching all kinds of skills. Plus tons of measurements of locations, temperatures, movements, weather, experiment results, and so on.

The ability of computers to process information jumped with punched-card readers, with electronic computers in the 1940s and 50s, again with transistorised machines in the 1960s, microprocessors in the 1970s, VLSI chips in the 1980s, commodity computer clusters (Google) in the 1990s, multi-core desktops for everyone in the 2000s, and general-purpose GPUs in the 2010s; and alongside that, commodity networking going from 10 Mbit to 100 Gbit and more, and storage moving through RAID, then SATA and SAS, then SSDs.

It's now completely normal to check Google Maps for road traffic and how busy stores are (picked up in near-realtime from the movement of smartphones around the planet), to do face and object recognition and search in your photos, to do realtime face editing/enhancement on a mobile chip while recording video, to track ever-growing amounts of exercise and health data from ever-growing numbers of people, to call and speak to people across the planet and have your voice transcribed automatically to text, and to download gigabytes of compressed Wikipedia onto a laptop and play with it in Python over a weekend, just for fun.

"AI" stuff (LLMs, neural networks and other techniques, PyTorch, TensorFlow, cloud GPUs and TPUs), the increase in research money, companies competing to hire the best researchers, the growth in tutorials and in the number of people around the world who want to play with all this and can... do you predict that by 2030, 2035, 2040, 2045, 2050... 2100 we'll have manufactured more compute power and storage than had ever been made before, several times over, and made it more and more accessible to more people, and yet nothing will change? That nothing interesting or new will be found, deliberately or stumbled upon accidentally; nothing new understood about human brains, biology, or cognition; no new insights, products, models, or AI techniques developed or become normal; no once-in-a-lifetime genius having a flash of insight?

danaris 2 hours ago | parent [-]

I mean, what you're describing is technological advancement. It's great! I'm fully in favor of it, and I fully believe in it.

It's not the singularity.

The singularity is a specific belief that we will achieve AGI, and that the AGI will then self-improve at an exponential rate, allowing it to become vastly more advanced and powerful (much more so than we could ever have made it), and that it will then also invent loads of new technologies and usher in a golden age. (Either for itself or us. That part's a bit under contention, from my understanding.)

jodrellblank 17 minutes ago | parent [-]

> "The singularity is a specific belief that we will achieve AGI"

That is one version of it, but not the only one. "John von Neumann is the first person known to have discussed a "singularity" in technological progress.[14][15] Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue""[1]. A time such that people before it would be unable to predict what came after, because it was so different. (Which I argue in another comment[2] is not a specific cutoff moment, but a trend over history of the future becoming harder and harder to predict over shorter and shorter timeframes.)

Apart from AGI, or von Neumann-style accelerationism, I also understand it as augmenting human intelligence: "once we become cyborgs and enhance our abilities, nobody can predict what comes next"; or artificial 'life': "if we make self-replicating nano-machines (which can undergo Darwinian natural selection?), all bets about the future are off"; or "once we can simulate human brains in a machine, even if we can't understand how they work, we can run tons of them at high speed".

> and usher in a golden age. (Either for itself or us. That part's a bit under contention, from my understanding.)

Arguably, we have built weakly-superhuman entities, in the form of companies. Collectively they can solve problems that individual humans can't, live longer than humans, deploy and exploit more resources over larger areas and longer timelines than humans, and have shown a tendency to burn through workers and ruin the environment that keeps us alive even while supposedly guided by human intelligence. I don't have very much hope that a non-human AGI would be more aligned with our interests than companies made up of us are.

[1] https://en.wikipedia.org/wiki/Technological_singularity

[2] https://news.ycombinator.com/item?id=46935546