kelnos 3 days ago

I disagree. We claim that biological humans have general intelligence out of bias, arrogance, and hubris. I'm not saying we aren't generally intelligent, but a big part of why we believe we are is that not believing so would be psychologically and culturally disastrous.

I fully expect that, as our attempts at AGI become more and more sophisticated, there will be a long period of intensely polarizing argument over whether what we've built is AGI. This seems self-evident to me; I can't imagine a world where we reach anything approaching consensus on it quickly.

If we could come up with a widely accepted definition of general intelligence, I think there'd be less argument, but even that wouldn't stop people from interpreting both the definition and its manifestations in different ways.

Davidzheng 3 days ago | parent | next

I can say it: humans are not "generally intelligent". We are intelligent across a distribution of environments that are similar enough to the ones we are used to. By basic information theory, there is no way to be intelligent with no priors on the environment: an adversary can always construct an environment that defeats whatever learning efficiency an "intelligent" being derives from its priors.
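
A loose formalization of this point, assuming the standard no-free-lunch framing of Wolpert and Macready (my gloss, not the commenter's exact argument): averaged uniformly over all objective functions f on a finite domain, any two search algorithms a_1 and a_2 have identical expected performance,

\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2)

where d_m^y is the sequence of m objective values the algorithm has sampled so far. Beating chance in some environments necessarily means losing in others, so any above-chance "intelligence" is paid for by priors over which environments actually occur.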

mindcrime 3 days ago | parent | prev | next

> We claim that biological humans have general intelligence out of bias, arrogance, and hubris.

No, we say it because - in this context - we are the definition of general intelligence.

Approximately nobody talking about AGI takes the "G" to stand for "the most general possible intelligence that could ever exist." All it means is "as general as an average human." So it doesn't matter whether humans are "really" generally intelligent or not; we are the benchmark being discussed here.

mindcrime 3 days ago | parent

If you don't believe me, go back to the introduction of the term[1]:

> By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be "conscious" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

It's pretty clear here that the notion of "artificial general intelligence" is being defined as relative to human intelligence.

Or see what Ben Goertzel - probably the one person most responsible for bringing the term into mainstream usage - had to say on the issue[2]:

> “Artificial General Intelligence”, AGI for short, is a term adopted by some researchers to refer to their research field. Though not a precisely defined technical term, the term is used to stress the “general” nature of the desired capabilities of the systems being researched -- as compared to the bulk of mainstream Artificial Intelligence (AI) work, which focuses on systems with very specialized “intelligent” capabilities. While most existing AI projects aim at a certain aspect or application of intelligence, an AGI project aims at “intelligence” as a whole, which has many aspects, and can be used in various situations. There is a loose relationship between “general intelligence” as meant in the term AGI and the notion of “g-factor” in psychology [1]: the g-factor is an attempt to measure general intelligence, intelligence across various domains, in humans.

Note the reference to "general intelligence" as a contrast to specialized AIs (what people used to call "narrow AI," though he doesn't use the term here). And the rest of that paragraph shows that the whole notion is clearly framed as a comparison to human intelligence.

That point is made even clearer when the paper goes on to say:

> Modern learning theory has made clear that the only way to achieve maximally general problem-solving ability is to utilize infinite computing power. Intelligence given limited computational resources is always going to have limits to its generality. The human mind/brain, while possessing extremely general capability, is best at solving the types of problems which it has specialized circuitry to handle (e.g. face recognition, social learning, language learning; ...)

Note that they specifically chose the more precise term "maximally general problem-solving ability" when referring to something beyond the range of human intelligence, and then went on to clearly show that the overall idea is - again - framed in terms of human intelligence.

One could also consult Marvin Minsky's words[3] from around the founding of the field of "Artificial Intelligence" itself:

> “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”

Simply put, with a few exceptions, the vast majority of people working in this space take AGI to mean something approximately like "human-like intelligence". That's all. No arrogance or hubris needed.

[1]: https://web.archive.org/web/20110529215447/http://www.foresi...

[2]: https://goertzel.org/agiri06/%255B1%255D%2520Introduction_No...

[3]: https://www.science.org/doi/10.1126/science.ado7069
