EliRivers 2 days ago

Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?

Tuna-Fish 2 days ago | parent | next [-]

If/when we have AGI, we will likely have something fundamentally superhuman very soon after, and that will be very recognizable.

This is the idea of "hard takeoff": because of the way we can scale computation, there will only ever be a very short time when the AI is roughly human-level. Even with no fundamental breakthroughs, at the very least silicon can be run much faster than meat, and instead of compensating for narrower width with execution speed, as current AI systems do (no AI datacenter is even close to the width of a human brain), you can just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or a good team of engineers) be able to accomplish with 10 times the workdays in a week that everyone else has?

This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.

EliRivers 2 days ago | parent | next [-]

Would AGI be recognisable to us? When a human pushes over an anthill, what do the ants think happened? Do they even know the anthill is gone? Did they have a concept of the anthill as a huge edifice, or did they only ever know earth to squeeze through and some biological instinct?

If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things happening that we can't comprehend?

card_zero 2 days ago | parent | prev [-]

But that's not ten times the workdays. That's just taking a bunch of speed and sitting by yourself worrying about something. Results may be eccentric.

Though I don't know what you mean by "width of a human brain".

Tuna-Fish 2 days ago | parent [-]

It's ten times the time to work on a problem. Taking a bunch of speed does not make your brain work faster; it just messes with your attention system.

> Though I don't know what you mean by "width of a human brain".

A human brain contains ~86 billion neurons connected to each other through ~100 trillion synapses. All of these parts work genuinely in parallel, operating at the same time to produce results.

When an AI model is run on a GPU, a single ALU can do the work analogous to a neuron activation much faster than a real neuron. But a GPU does not have 86 billion ALUs; it has on the order of 20k. It "simulates" a much wider, parallel processing system by streaming in weights and activations and processing them ~20k at a time. Large AI datacenters have built systems with many GPUs working in parallel on a single model, but they are still a tiny fraction of the true width of the brain, and cannot reach anywhere near the same number of neuron activations per second that a brain can.
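
As a back-of-the-envelope illustration of that width gap, here is a rough sketch using the same ballpark figures as above (the average firing rate is my own assumption, not a number from this thread):

    # Rough back-of-envelope numbers only; figures are the ballpark ones above.
    BRAIN_NEURONS  = 86e9     # ~86 billion neurons
    BRAIN_SYNAPSES = 100e12   # ~100 trillion synapses
    GPU_ALUS       = 20e3     # ~20k ALUs on a single GPU

    # How much "narrower" a single GPU is than the brain's synapse count,
    # i.e. how many streaming passes it needs to touch every weight once.
    narrowing_factor = BRAIN_SYNAPSES / GPU_ALUS
    print(f"one GPU is ~{narrowing_factor:.0e}x narrower than the brain")

    # Assumed average firing rate of ~1 Hz (an assumption, not a thread figure):
    AVG_FIRING_HZ = 1.0
    synaptic_events_per_second = BRAIN_SYNAPSES * AVG_FIRING_HZ
    print(f"brain: roughly {synaptic_events_per_second:.0e} synaptic events/second")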

If/when we have a model that can actually do complex reasoning tasks such as programming and designing new computers as well as a human can, with no human helping to prompt it, we can just scale it out to give it more hours per day to work, all the way until every neuron has a real computing element to run it. The difference in experience for such a system between running "narrow" and running "wide" is just that the wall clock runs slower when you are running wide. That is, you have more hours per day to work on things.

card_zero 2 days ago | parent [-]

That's what I was trying to express, though: if "the wall clock runs slower", that's less useful than it sounds, because all you have to interact with is yourself.

I exaggerate somewhat. You could interact with databases and computers (if you can bear the lag and compile times). You could produce a lot of work, and test it in any internal way that you can think of. But you can't do outside world stuff. You can't make reality run faster to keep up with your speedy brain.

Tuna-Fish 2 days ago | parent [-]

You can interact with yourself, and everyone else like you.

There is a lot of important work where humans thinking about things is the bottleneck.

card_zero a day ago | parent [-]

Possibly. Here we imagine a world of artificial people - well, a community, depending on how many of these people it's feasible to maintain - all thinking very fast and communicating in some super-low-latency way. (Do we revive dial-up? Or maybe they all live in the same building?) And they presumably have bodies, at least one each. But how fast can they do things with their bodies? Physics becomes another bottleneck. They'd need lots of entertainment to keep them in a good mood while they wait for just about any real-world process to complete.

I still contend that it would be a somewhat mediocre superpower.

GeorgeTirebiter 2 days ago | parent | prev | next [-]

Mustafa Suleyman says AGI is when a (single) machine can perform every cognitive task better than the best humans. That is significantly different from OpenAI's definition (...when we make enough $$$$$, it's AGI).

Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI), a stage between today's LLMs (== "AI" now) and AGI: AI systems capable of handling a lot of complex tasks across various domains, yet not fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services, as is happening on a small scale today.

He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree; what we already have has not been tapped out.

His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.

The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!

Good advice; and go (re-?) read Minsky's "Society of Mind".

shmatt 2 days ago | parent | prev | next [-]

We are sort of able to recognize Nobel-worthy breakthroughs.

One of the many definitions I have for AGI is being able to create the proofs for the 2030, 2050, 2100, etc. Nobel Prizes, today.

A sillier one I like is that AGI would output a correct proof that P ≠ NP on day 1.

tough 2 days ago | parent [-]

Isn't AGI just "general" intelligence, as in like a regular human, a Turing-test kind of deal?

Aren't you thinking of ASI/superintelligence, something capable of far outdoing humans?

kadushka 2 days ago | parent [-]

Yes, the general consensus is that AGI should be able to perform any task an average human is able to perform. Definitely nothing of Nobel Prize level.

EliRivers 2 days ago | parent | next [-]

A bit poorly named; not really very general. AHI would be a better name.

timeon 2 days ago | parent | next [-]

AAI would be enough for me, although there are people who deny the intelligence of non-human animals.

kadushka 2 days ago | parent | prev [-]

Another general consensus is that humans possess general intelligence.

EliRivers 2 days ago | parent [-]

Yes, we do seem to have a very high opinion of ourselves.

aleph_minus_one 2 days ago | parent | prev [-]

> Yes, a general consensus is AGI should be able to perform any task an average human is able to perform.

The goalposts are regularly moved so that AI companies and their investors can claim/hype that AGI will be around in a few years. :-)

kadushka 2 days ago | parent [-]

I learned the definition I provided back in the mid-'90s, and it hasn't really changed since then.

xnx 2 days ago | parent | prev | next [-]

There's a test for this: https://arcprize.org/arc-agi

Basically a CAPTCHA. If there's something that humans can easily do that a machine cannot, full AGI has not been achieved.
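
For a sense of what those puzzles look like, here's a toy task in the spirit of the public ARC format; the grids and the hidden rule are made up for illustration and aren't taken from the benchmark:

    # A toy ARC-style task (illustrative only). Cells are integer colours;
    # the solver must infer the transformation from the training pairs.
    # The deliberately trivial hidden rule here: mirror each grid left-right.
    task = {
        "train": [
            {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
            {"input": [[3, 3, 0], [0, 4, 0]], "output": [[0, 3, 3], [0, 4, 0]]},
        ],
        "test": [
            {"input": [[5, 0, 0], [0, 6, 0]]},  # expected: [[0, 0, 5], [0, 6, 0]]
        ],
    }

    def solve(grid):
        # A human spots the rule instantly; the point of the benchmark is that
        # every task hides a different rule, so nothing hard-coded generalises.
        return [list(reversed(row)) for row in grid]

    for pair in task["train"]:
        assert solve(pair["input"]) == pair["output"]
    print(solve(task["test"][0]["input"]))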

dingnuts 2 days ago | parent | prev | next [-]

You'd be able to give them a novel problem and have them generalize from known concepts to solve it. Here's an example:

1. Write a specification for a language in natural language.

2. Write an example program in that language.

Can you feed (1) into a model and have it produce a compiler for (2) that works as reliably as a classically built one?

I think that's a low bar that hasn't been approached yet. Until then, I don't see evidence of language models' ability to reason.
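
For concreteness, a minimal sketch of how that check could be wired up mechanically. The file names and the ask_model helper are placeholders I'm assuming here, not part of any existing harness:

    # Hypothetical harness for the test above; ask_model stands in for
    # whatever LLM API you use, and the input files are placeholders.
    import pathlib
    import subprocess
    import sys
    import tempfile

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("call your LLM of choice here")

    spec = pathlib.Path("language_spec.txt").read_text()  # step 1: the spec
    program_path = "example_program.mylang"                # step 2: the program
    expected = pathlib.Path("expected_output.txt").read_text()

    # Ask the model to emit a complete compiler/interpreter as one Python file.
    compiler_src = ask_model(
        "Here is a specification for a programming language:\n"
        f"{spec}\n"
        "Write a complete Python program that reads a source file named in "
        "argv[1] and executes it according to the specification above. "
        "Output only the code."
    )

    with tempfile.TemporaryDirectory() as tmp:
        compiler_file = pathlib.Path(tmp) / "compiler.py"
        compiler_file.write_text(compiler_src)
        result = subprocess.run(
            [sys.executable, str(compiler_file), program_path],
            capture_output=True, text=True, timeout=120,
        )
        print("PASS" if result.stdout == expected else "FAIL")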

EliRivers 2 days ago | parent | next [-]

I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general. That clever human thinking would be a subset of what it could do.

logicchains 2 days ago | parent | prev [-]

You could ask Gemini 2.5 to do that today and it's well within its capabilities, just as long as you also let it write and run unit tests, as a human developer would.

fusionadvocate 2 days ago | parent | prev | next [-]

AI will face the same limitations we face: availability of information and the non-deterministic nature of the world.

psadri 2 days ago | parent | prev | next [-]

What do monkeys think about humans?

logicchains 2 days ago | parent | prev [-]

AGI isn't ASI; it's not supposed to be smarter than humans. The people who say AGI is far away are unscientific woo-mongers, because they never give a concrete, empirically measurable definition of AGI. The closest we have is Humanity's Last Exam, which LLMs are already well on the path to acing.

quonn 2 days ago | parent | next [-]

Consider this: born/trained in 1900, if that were possible, and given a year to adapt to the world of 2025, how well would an LLM do on any test? Compare that to how a 15-year-old human in the same situation would do.

EliRivers 2 days ago | parent | prev [-]

I'd expect it to be generalised, where we (and everything else we've ever met) are specialised. Our intelligence is shaped by our biology and our environment; the limitations on our thinking are themselves concepts the best of us can barely glimpse. Some kind of intelligence that inherently transcends its substrate.

What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring that something which thinks like us has "general intelligence" would itself be a symptom of our limited thinking.