__MatrixMan__ 2 days ago

What the hell is general intelligence anyway? People seem to think it means human-like intelligence, but I can't imagine we have any good reason to believe that our kinds of intelligence constitute all possible kinds of intelligence--which, from the words, must be what "general" intelligence means.

It seems like even if it's possible to achieve GI, artificial or otherwise, you'd never be able to know for sure that that's what you've done. It's not exactly "useful benchmark" material.

thomasahle 2 days ago | parent | next [-]

> What the hell is general intelligence anyway?

OpenAI used to define it as "a highly autonomous system that outperforms humans at most economically valuable work."

Now they use a Level 1-5 scale: https://briansolis.com/2024/08/ainsights-openai-defines-five...

So we can say AGI is "AI that can do the work of Organizations":

> These “Organizations” can manage and execute all functions of a business, surpassing traditional human-based operations in terms of efficiency and productivity. This stage represents the pinnacle of AI development, where AI can autonomously run complex organizational structures.

TheOtherHobbes 2 days ago | parent | next [-]

There's nothing general about AI-as-CEO.

That's the opposite of generality. It may well be the opposite of intelligence.

An intelligent system/individual reliably and efficiently produces competent, desirable, novel outcomes in some domain, while avoiding outcomes that are incompetent, derivative, or self-harming.

Traditional computing is very good at this for a tiny range of problems. You get efficient, very fast, accurate, repeatable automation for a certain small set of operation types. You don't get invention or novelty.

AGI will scale this reliably across all domains - business, law, politics, the arts, philosophy, economics, all kinds of engineering, human relationships. And others. With novelty.

LLMs are clearly a long way from this. They're unreliable, they're not good at novelty, and a lot of what they do isn't desirable.

They're barely in sight of human levels of achievement - not a high bar.

The current state of LLMs tells us more about how little we expect from human intelligence than about what AGI could be capable of.

Thrymr 2 days ago | parent | prev | next [-]

Apparently OpenAI now just defines it monetarily as "when we can make $100 billion from it." [0]

[0] https://gizmodo.com/leaked-documents-show-openai-has-a-very-...

olyjohn 2 days ago | parent [-]

That's what "economically valuable work" means.

lupusreal 2 days ago | parent | prev | next [-]

The way some people confidently assert that we will never create AGI, I am convinced the term essentially means "machine with a soul" to them. It reeks of religiosity.

I guess if we exclude those, then it just means the computer is really good at doing the kind of things which humans do by thinking. Or maybe it's when the computer is better at it than humans, and merely being as good as the average human isn't enough (implying that average humans don't have natural general intelligence? Seems weird.)

cmsj 2 days ago | parent [-]

When we say "the kind of things which humans do by thinking", we should really consider that in the long arc of history. We've bootstrapped ourselves from figuring out that flint is sharp when it breaks, to being able to do all of the things we do today. There was no external help, no pre-existing dataset trained into our brains, we just observed, experimented, learned and communicated.

That's general intelligence - the ability to explore a system you know nothing about (in our case, physics, chemistry and biology) and then interrogate and exploit it for your own purposes.

LLMs are an incredible human invention, but they aren't anything like what we are. They are born as the most knowledgeable things ever, but they die no smarter.

logicchains 2 days ago | parent | prev [-]

> you'd never be able to know for sure that that's what you've done.

Words mean what they're defined to mean. Talking about "general intelligence" without a clear definition is just woo — muddy thinking that achieves nothing. A fundamental tenet of the scientific method is that only testable claims are meaningful claims.