throwaway31131 3 days ago

We would also need a definition of AGI that is provable or disprovable.

We don’t even have a workable definition, never mind a machine.

thfuran 3 days ago | parent | next [-]

Only if we need to classify things near the boundary. If we make something that does better on every test we can devise than any human we can find, I think we can say that no reasonable definition of AGI would exclude it, without ever actually arriving at a definition.

tshaddox 3 days ago | parent | prev [-]

We don’t need such a definition of general intelligence to conclude that biological humans have it, so I’m not sure why we’d need such a definition for AGI.

kelnos 3 days ago | parent | next [-]

I disagree. We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris. I'm not saying we aren't generally intelligent, but a big part of believing we are is because not believing so would be psychologically and culturally disastrous.

I fully expect that, as our attempts at AGI become more and more sophisticated, there will be a long period where there are intensely polarizing arguments as to whether or not what we've built is AGI or not. This feels so obvious and self-evident to me that I can't imagine a world where we achieve anything approaching consensus on this quickly.

If we could come up with a widely-accepted definition of general intelligence, I think there'd be less argument, but it wouldn't preclude people from interpreting both the definition and its manifestation in different ways.

Davidzheng 3 days ago | parent | next [-]

I can say it. Humans are not "generally intelligent". We are intelligent across a distribution of environments similar enough to the ones we are used to. There's no way to be intelligent with no priors on the environment, basically by information theory: you can construct an environment that is adversarial to the learning efficiency of "intelligent" beings, which comes from those priors.

mindcrime 3 days ago | parent | prev | next [-]

> We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris.

No, we say it because - in this context - we are the definition of general intelligence.

Approximately nobody talking about AGI takes the "G" to stand for "the most general possible intelligence that could ever exist." All it means is "as general as an average human." So it doesn't matter whether humans are "really" generally intelligent or not; we are the benchmark being discussed here.

mindcrime 3 days ago | parent [-]

If you don't believe me, go back to the introduction of the term[1]:

> By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be "conscious" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

It's pretty clear here that the notion of "artificial general intelligence" is being defined as relative to human intelligence.

Or see what Ben Goertzel - probably the one person most responsible for bringing the term into mainstream usage - had to say on the issue[2]:

> “Artificial General Intelligence”, AGI for short, is a term adopted by some researchers to refer to their research field. Though not a precisely defined technical term, the term is used to stress the “general” nature of the desired capabilities of the systems being researched -- as compared to the bulk of mainstream Artificial Intelligence (AI) work, which focuses on systems with very specialized “intelligent” capabilities. While most existing AI projects aim at a certain aspect or application of intelligence, an AGI project aims at “intelligence” as a whole, which has many aspects, and can be used in various situations. There is a loose relationship between “general intelligence” as meant in the term AGI and the notion of “g-factor” in psychology [1]: the g-factor is an attempt to measure general intelligence, intelligence across various domains, in humans.

Note the reference to "general intelligence" as a contrast to specialized AIs (what people used to call "narrow AI," even though he doesn't use the term here). And the rest of that paragraph shows that the whole notion is clearly framed in terms of comparison to human intelligence.

That point is made even more clear when the paper goes on to say:

> Modern learning theory has made clear that the only way to achieve maximally general problem-solving ability is to utilize infinite computing power. Intelligence given limited computational resources is always going to have limits to its generality. The human mind/brain, while possessing extremely general capability, is best at solving the types of problems which it has specialized circuitry to handle (e.g. face recognition, social learning, language learning;

Note that they chose to use the more precise term "maximally general problem-solving ability" when referring to something beyond the range of human intelligence, and then continued to clearly frame the overall idea - again - in terms of human intelligence.

One could also consult Marvin Minsky's words[3] from back around the founding of the overall field of "Artificial Intelligence" altogether:

> “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”

Simply put, with a few exceptions, the vast majority of people working in this space take AGI to mean something approximately like "human-like intelligence". That's all. No arrogance or hubris needed.

[1]: https://web.archive.org/web/20110529215447/http://www.foresi...

[2]: https://goertzel.org/agiri06/%255B1%255D%2520Introduction_No...

[3]: https://www.science.org/doi/10.1126/science.ado7069

3 days ago | parent | prev [-]
[deleted]
bastawhiz 3 days ago | parent | prev [-]

Well, general intelligence in humans already exists, whereas it doesn't yet exist in machines. How do we know when we have it? You can't even simply compare it to humans and ask "is it able to do the same things?" because the answer depends on what you define those things to be. Surely you wouldn't say that someone who can't remember names or navigate without GPS lacks general intelligence, so it's necessary to define which criteria are absolutely required.

tshaddox 3 days ago | parent | next [-]

> You can't even simply compare it to humans and ask "is it able to do the same things?" because your answer depends on what you define those things to be.

Right, but you can’t compare two different humans either. You don’t test each new human to see if they have it. Somehow we conclude that humans have it without doing either of those things.

Jensson 3 days ago | parent [-]

> You don’t test each new human to see if they have it

We do; it's called school, and we label some humans with different learning disabilities. Some of those disabilities are grave enough that the person can't learn to do tasks we expect humans to be able to learn; such humans can be argued not to possess the general intelligence we expect from humans.

Interacting with an LLM today is like interacting with an Alzheimer's patient: they can do things they already learned well, but poke at them and it all falls apart. They start repeating themselves; they can't learn.

tshaddox 2 days ago | parent [-]

Yes, there are diseases, injuries, etc. which can impair a human’s cognitive abilities. Sometimes those impairments are so severe that we don’t consider the human to be intelligent (or even alive!). But note that we still make this distinction without anything close to a rigorous formal definition of general intelligence.

jibal 3 days ago | parent | prev [-]

How do we know when a newborn has achieved general intelligence? We don't need a definition amenable to proof.

Jensson 3 days ago | parent | next [-]

It's a near clone of a model that already has it. We don't need to prove it has general intelligence; we just assume it does, because most humans do have it.

jibal 2 days ago | parent | prev [-]

P.S. The response is just an evasion.