mindcrime a day ago

Terry Tao is a genius, and I am not. So I probably have no standing to claim to disagree with him. But I find this post less than fulfilling.

For starters, I think we can rightly ask what it means to say "genuine artificial general intelligence", as opposed to just "artificial general intelligence". Actually, I think it's fair to ask what "genuine artificial" $ANYTHING would be.

I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence". Something like that seems to be what a lot of people are saying when they talk about AI and make claims like "that's not real AI". But for myself, I reject the notion that we need "genuine artificial general intelligence" that works like human intelligence in order to say we have artificial general intelligence. Human intelligence is a nice existence proof that some sort of "general intelligence" is possible, and a nice example to model after, but the marquee sign does say artificial at the end of the day.

Beyond that... I know, I know - it's the oldest cliché in the world, but I will fall back on it because it's still valid, no matter how trite. We don't say "airplanes don't really fly" because they don't use the exact same mechanism as birds. And I don't see any reason to say that an AI system isn't "really intelligent" if it doesn't use the same mechanism as humans.

Now maybe I'm wrong and Terry meant something altogether different, and all of this is moot. But it felt worth writing this out, because I feel like a lot of commenters on this subject engage in a line of thinking like what is described above, and I think it's a poor way of viewing the issue no matter who is doing it.

npinsker 20 hours ago | parent | next [-]

> I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence".

I think he means "something that can discover new areas of mathematics".

metalcrow 6 hours ago | parent | next [-]

In that case, I'm afraid many people, myself included, would not be describable as "general intelligences"!

fl7305 5 hours ago | parent | prev | next [-]

> "something that can discover new areas of mathematics".

How many software engineers with a good math education can do this?

mindcrime 20 hours ago | parent | prev | next [-]

Very reasonable, given his background!

That does seem awfully specific though, in the context of talking about "general" intelligence. But I suppose it could rightly be argued that any intelligence capable of "discovering new areas of mathematics" would inherently need to be fairly general.

themafia 19 hours ago | parent [-]

> That does seem awfully specific though

It's one of a large set of attributes you would expect in something called "AGI."

throw310822 4 hours ago | parent [-]

Then I don't get the distinction between AGI and superintelligence. Is there one?

mindcrime 3 hours ago | parent | next [-]

I agree with /u/AnimalMuppet, FWIW. As long as I've been doing this stuff (and I've been doing it for quite some time) AGI has been interpreted (somewhat loosely) as something like "Intelligence equivalent to an average human adult" or just "human level intelligence". But as /u/AnimalMuppet points out, there's quite a bit of variance to human intelligence, and nobody ever really specified in detail exactly which "human intelligence" AGI was meant to correspond to.

Superintelligence (or ASI), OTOH, has - so far as I can recall - always been even more loosely specified, and translates roughly to "an intelligence beyond any human intelligence".

Another term you might hear, although not as frequently, is "Universal Artificial Intelligence". This comes mostly from the work of Marcus Hutter[1] and means something approximately like "an intelligence that can solve any problem that can, in principle, be solved".

[1]: https://www.hutter1.net/ai/uaibook.htm
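For the curious, Hutter's framework isn't just a slogan; it has a formal agent, AIXI. Sketching its action rule from memory (so treat the exact indices with suspicion and check the book for the precise formulation): the agent picks actions to maximize expected future reward, weighting each computable environment q by 2^(-length of q), a Solomonoff-style prior:

```latex
% AIXI action selection, sketched from memory -- see Hutter's book for the
% exact formulation. U is a universal Turing machine, q ranges over programs
% (candidate environments), \ell(q) is the length of q, a/o/r are actions,
% observations, and rewards, and m is the horizon.
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
    \bigl( r_k + \cdots + r_m \bigr)
    \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The point being: "universal" here is a precise mathematical claim about optimality over all computable environments, not marketing language.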

AnimalMuppet 3 hours ago | parent | prev [-]

AGI is human-level. (Which human level is the question: high school? College graduate? PhD? Terence Tao?)

Superintelligence is smarter than Terence Tao, or any other human.

dr_dshiv 20 hours ago | parent | prev [-]

I’d love to take that bet

catoc 20 hours ago | parent | prev | next [-]

I interpret “artificial” in “artificial general intelligence” as “non-biological”.

So in Tao’s statement I interpret “genuine” not as an adverb modifying the “artificial” adjective but as an attributive adjective modifying the noun “intelligence”, describing its quality… “genuine intelligence that is non-biological in nature”

mindcrime 20 hours ago | parent [-]

> So in Tao’s statement I interpret “genuine” not as an adverb modifying the “artificial” adjective but as an attributive adjective modifying the noun “intelligence”, describing its quality… “genuine intelligence that is non-biological in nature”

That's definitely possible. But it seems redundant to phrase it that way. That is to say, the goal (the end goal anyway) of the AI enterprise has always been, at least as I've always understood it, to make "genuine intelligence that is non-biological in nature". That said, Terry is a mathematician, not an "AI person" so maybe it makes more sense when you look at it from that perspective. I've been immersed in AI stuff for 35+ years, so I may have developed a bit of myopia in some regards.

catoc 19 hours ago | parent [-]

I agree, it’s redundant. To us humans - to me at least - intelligence is always general (calculator: not; chimpanzee: a little), so “general intelligence” can already be considered redundant. Using “genuine” just heaps on more redundancy (with the assumed goal of distinguishing “genuine” AGI from tools that merely appear smart in limited domains).

scellus 19 hours ago | parent | prev | next [-]

I find it odd that the post above is downvoted to grey, feels like some sort of latent war of viewpoints going on, like below some other AI posts. (Although these misvotes are usually fixed when the US wakes up.)

The point above is valid. I'd like to deconstruct the concept of intelligence even more. What humans are able to do is a relatively arbitrary collection of skills that a physical and social organism needs. The highly valued intelligence around math and the like is a corner case of those abilities.

There's no reason to think that human mathematical intelligence is unique in its structure, an isolated, well-defined skill. Artificial systems are likely to be able to do much more: maybe not exactly the same peak ability, but adjacent ones, many of which will be superhuman and will augment what humans do. This will likely include "new math" in some sense too.

omnimus 18 hours ago | parent [-]

What everybody is looking for is imagination and invention. Current AI systems can give a best-guess statistical answer from the dataset they've been fed. It is always compression.

The problem, and what most people intuitively understand, is that this compression is not enough. There is something more going on, because people can come up with novel ideas and solutions, and, more importantly, they can judge and figure out whether a solution will work. So even if the core of the idea is “compressed” or “mixed” from past knowledge, there is some other process going on that leads to the important part: invention and progress.

That is why people hate the term AI: because it is just a partial capability of “intelligence”, or it might even be a complete illusion of intelligence that is nowhere close to what people would expect.

fl7305 5 hours ago | parent | next [-]

> Current AI systems can give a best-guess statistical answer from the dataset they've been fed.

Counterpoint: ChatGPT came up with the new idiom "The confetti has left the cannon"

in-silico 15 hours ago | parent | prev [-]

> Current AI systems can give a best-guess statistical answer from the dataset they've been fed.

What about reinforcement learning? RL models don't train on an existing dataset, they try their own solutions and learn from feedback.

RL models can definitely "invent" new things. Here's an example where they design novel molecules that bind with a protein: https://academic.oup.com/bioinformatics/article/39/4/btad157...
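To make the "learn from feedback, not from a dataset" point concrete, here's a minimal sketch (not from the linked paper; the numbers and names are made up for illustration) of an epsilon-greedy bandit agent that starts with no data at all and finds the best action purely by trying things and observing rewards:

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: no training dataset; the agent generates
    its own experience by acting and observing noisy rewards."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: try a random action
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit best guess
        reward = rng.gauss(true_means[arm], 1.0)  # feedback from the environment
        counts[arm] += 1
        # incremental average of observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = run_bandit([0.1, 0.5, 0.9])
print(est)  # estimates should land near the true means [0.1, 0.5, 0.9]
```

It's a toy, but it illustrates the structural difference from supervised learning: there is no corpus to compress, only interaction.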

omnimus 14 hours ago | parent [-]

Finding variations in a constrained haystack with measurable, well-defined results is what machine learning has always been good at. Tracing the most efficient Trackmania route is impressive, and the resulting route might be original in the sense that a human would never come up with it. But is it actually novel in a creative, critical way? Isn't it simply computational brute force? And how big would that force have to be in the physical, less constrained world?

enraged_camel 20 hours ago | parent | prev [-]

The airplane analogy is a good one. Ultimately, if it quacks like a duck and walks like a duck, does it really matter if it’s a real duck or an artificial one? Perhaps only if something tries to eat it, or another duck tries to mate with it. In most other contexts though it could be a valid replacement.

clort 19 hours ago | parent [-]

Just out of interest though, can you suggest some of these other contexts where you might want a valid replacement for a duck that looked like one, walked like one and quacked like one but was not one?

alex43578 19 hours ago | parent [-]

Decoy for duck hunting?

omnimus 19 hours ago | parent [-]

Are you suggesting LLMs are decoy for investor hunting?

heresie-dabord 17 hours ago | parent [-]

In the same sly vein of humour, the first rule of Money Club is to never admit that the duck may be lame.