AndrewKemendo 8 hours ago

This debate is exhausting because there's no coherent definition of AGI that people agree on.

I made a Google Form for collecting AGI definitions because I don't see anyone else doing it, and I find the range of definitions for this concept infinitely frustrating:

https://docs.google.com/forms/d/e/1FAIpQLScDF5_CMSjHZDDexHkc...

My concern is that people will never get focused enough to care to define it - that seems like the most likely case.

johnb231 3 hours ago | parent | next [-]

The Wikipedia article on AGI explains it well enough.

Researchers at Google have proposed a classification scheme with multiple levels of AGI. There are different opinions in the research community.

https://arxiv.org/abs/2311.02462
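
For reference, a rough sketch of the levels that paper proposes, paraphrased from memory (the exact wording, thresholds, and the separate narrow-vs-general axis are in the paper itself):

    # Sketch of the "Levels of AGI" taxonomy from Morris et al. (arXiv:2311.02462),
    # paraphrased from memory; consult the paper for the exact definitions.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AGILevel:
        level: int
        name: str
        performance_bar: str  # rough description of the performance threshold

    LEVELS = [
        AGILevel(0, "No AI", "no automation of the task"),
        AGILevel(1, "Emerging", "equal to or somewhat better than an unskilled human"),
        AGILevel(2, "Competent", "at least 50th percentile of skilled adults"),
        AGILevel(3, "Expert", "at least 90th percentile of skilled adults"),
        AGILevel(4, "Virtuoso", "at least 99th percentile of skilled adults"),
        AGILevel(5, "Superhuman", "outperforms 100% of humans"),
    ]

    for lvl in LEVELS:
        print(f"Level {lvl.level}: {lvl.name} -- {lvl.performance_bar}")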

bigyabai 7 hours ago | parent | prev | next [-]

It is a marketing term. That's it. Trying to exhaustively define what AGI is or could be is like trying to explain what a Happy Meal is. At its core, the Happy Meal was not invented to revolutionize eating; it puts an attractive label on some mediocre food, a title that exists purely for advertisement.

There is no point collecting definitions for AGI; it was not conceived as a description of something novel or provably existent. It is "Happy Meal marketing" aimed at adults.

AndrewKemendo 4 hours ago | parent | next [-]

That's historically inaccurate.

My master's thesis advisor, Ben Goertzel, popularized the term and has been hosting the AGI conference since 2008:

https://agi-conference.org/

https://goertzel.org/agiri06/%5B1%5D%20Introduction_Nov15_PW...

I had lunch with Yoshua Bengio at AGI 2014, and it was most of the conversation that day.

HarHarVeryFunny 4 hours ago | parent | prev | next [-]

The name AGI (i.e. generalist AI) was originally intended to contrast with narrow AI, which is only capable of one, or a few, specific narrow skills. A narrow AI might be able to play chess, or distinguish 20 breeds of dog, but wouldn't be able to play tic-tac-toe because it wasn't built for that. AGI would be able to learn to do anything, within reason.

The term AGI is obviously used very loosely, with little agreement on its precise definition, but I think a lot of people take it to mean not only generality, but specifically human-level generality, and a human-level ability to learn from experience and solve problems.

A large part of the problem with AGI being poorly defined is that intelligence itself is poorly defined. Even if we choose to define AGI as meaning human-level intelligence, what does THAT mean? I think there is a simple reductionist definition of intelligence (as the word is used to refer to human/animal intelligence), but ultimately the meanings of words are derived from their usage, and the word "intelligence" is used in 100 different ways ...

johnb231 3 hours ago | parent | prev [-]

Generalization is a formal concept in machine learning and is measurable.
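
To illustrate, a minimal sketch of how that is typically measured, assuming a scikit-learn-style workflow (the dataset and classifier here are arbitrary placeholders): the generalization gap is the difference between performance on training data and performance on held-out data.

    # Toy illustration: measure generalization as the train/test accuracy gap.
    # The dataset and model are placeholders; any model/data pair works the same way.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

    train_acc = model.score(X_train, y_train)  # accuracy on data the model has seen
    test_acc = model.score(X_test, y_test)     # accuracy on unseen data
    print(f"train={train_acc:.3f}  test={test_acc:.3f}  gap={train_acc - test_acc:.3f}")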

mvkel 8 hours ago | parent | prev [-]

It doesn't really seem like there's much utility in defining it. It's like defining "heaven."

It's an ideal that some people believe in, and we're perpetually marching towards it

theptip 7 hours ago | parent | next [-]

No, it's never going to be precise, but it's important to have a good rough definition.

Can we just use Morris et al and move on with our lives?

Position: Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/html/2311.02462v4

There are generational policy and societal shifts that need to be addressed somewhere around true Competent AGI (50% of knowledge work tasks automatable). Just as with climate change, we need a shared lexicon to refer to this continuum. You can argue for different values of X, but the crucial point is that if X% of knowledge work is automated within a decade, then there are obvious risks we need to think about.

So much of the discourse is stuck at “we will never get to X=99” when we could agree to disagree on that and move on to considering the X=25 case. Or predict our timelines for X and then actually be held accountable for our falsifiable predictions, instead of the current vibe-based discussions.
