seanalltogether 7 days ago

Do we have a reasonable definition for what intelligence is? Is it like defining porn, you just know it when you see it?

smnrchrds 5 days ago | parent | next [-]

OpenAI defines AGI as a "highly autonomous system that outperforms humans at most economically valuable work" [0]. It may not be the most satisfying definition, but it is practical and a good goal to aim for if you are an AI company.

[0] https://openai.com/our-structure/

AndrewDucker 7 days ago | parent | prev | next [-]

My personal definition is "The ability to form models from observations and extrapolate from them."

LLMs are great at forming models of language from observations of language and extrapolating language constructs from them. But to get general intelligence, we're going to have to let AIs build their models from direct measurements of reality.
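
To make that "form models from observations and extrapolate" framing concrete, here is a toy Python sketch (purely illustrative, assuming numpy; not anything the commenter wrote): form a model from a handful of observations by fitting a line, then extrapolate beyond the observed range.

    # Toy illustration of "observe -> model -> extrapolate" in the narrowest sense.
    import numpy as np

    # Observations: a few noisy measurements of some process.
    x_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y_obs = np.array([0.1, 2.1, 3.9, 6.2, 8.0])

    # Form a model from the observations (here, a fitted line).
    slope, intercept = np.polyfit(x_obs, y_obs, deg=1)

    # Extrapolate: predict well outside the observed range.
    x_new = 10.0
    prediction = slope * x_new + intercept
    print(f"predicted y at x={x_new}: {prediction:.2f}")

The argument above is that LLMs do something analogous with next-token prediction over text: the observations they model are language, not direct measurements of reality.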

daveguy 7 days ago | parent [-]

> LLMs are great at forming models of language

They really aren't even great at forming models of language. Each LLM is itself a single model of language; they don't build models, much less use those models. See, for example, ARC-AGI 1 and 2. They only performed decently on ARC 1 [0] with additional training, and are failing miserably on ARC 2. That's not even getting to ARC 3.

[0] https://arcprize.org/blog/oai-o3-pub-breakthrough

> Note on "tuned": OpenAI shared they trained the o3 we tested on 75% of the Public Training set. They have not shared more details. We have not yet tested the ARC-untrained model to understand how much of the performance is due to ARC-AGI data.

... Clearly not able to reason about the problems without additional training. And there's no indication that the additional training didn't include feature extraction, scaffolding, RLHF, etc. created by human intelligence. It's impressive that fine-tuning can get >85%, but that's still additional human-directed training, not self-contained intelligence at the reported level of performance. The blog was very generous in relegating the undefined "fine tuning" to a footnote and praising the results as if they came directly from a model that would have cost >$65,000 to run.

Edit: to be clear, I understand LLMs are a huge leap forward in AI research and possibly the first models that can provide useful results across multiple domains without being retrained. But they're still not creating their own models, even of language.

alan-crowe 7 days ago | parent | prev | next [-]

LLMs have demonstrated that "intelligence" is a broad umbrella term that covers a variety of very different things.

Think about this story https://news.ycombinator.com/item?id=44845442

Med-Gemini is clearly intelligent, but equally clearly it is an inhuman intelligence with different failure modes from human intelligence.

If we say Med-Gemini is not intelligent, we will end up having to concede that, actually, it is intelligent. And the danger of that concession is that we will underestimate how different it is from human intelligence and then get caught out by inhuman failures.

pan69 7 days ago | parent | prev | next [-]

> Is it like defining porn

I guess when it comes to the definition of intelligence, just like porn, different people have different levels of tolerance.

erikerikson 7 days ago | parent | prev [-]

One of my favorites is "efficient cross-domain maximization."

optimalsolver 7 days ago | parent [-]

Efficient, cross-domain optimization.

I believe that’s Eliezer Yudkowsky’s definition.

erikerikson 7 days ago | parent [-]

I did initially encounter it on LessWrong and modified it slightly according to my preference. Did he coin the term? There are a lot of ideas (not inappropriately) presented without attribution in that context.

optimalsolver 7 days ago | parent [-]

As far as I know, it’s his own term.