jmward01 2 hours ago

I think we keep changing the goalposts on AGI. If you had given me CC in the 80's I would probably have called it 'alive', since it clearly passes the Turing test as I understood it then (I wouldn't have been able to distinguish it from a person for most conversations). Now every time it gets better we push that definition further, widen every crack we find into a chasm, and declare that it isn't close. At the same time, there are a lot of people I would suspect of being bots based on how they act and respond, and a lot of bots I know are bots mainly because they answer too well.

Maybe we need to think less about building tests for definitively calling an LLM AGI, and instead decide that AGI is here once we can no longer tell that humans aren't LLMs.

sho_hn 2 hours ago | parent | next [-]

> I think we keep changing the goalposts on AGI

Isn't that exactly what you would expect to happen as we learn more about the nature and inner workings of intelligence and refine our expectations?

There's no reason to rest our case with the Turing test.

I hear the "shifting goalposts" riposte a lot, but it would be very unexciting to freeze our ambitions.

At least in an academic sense, what LLMs aren't is just as interesting as what they are.

breezybottom an hour ago | parent | next [-]

I think the advancement in AI over the last four years has greatly exceeded the advancement in understanding the workings of human intelligence. What paradigm shift has there been recently in that field?

smcg an hour ago | parent [-]

What have we learned that isn't in my textbook from the 90s?

echelon an hour ago | parent | prev [-]

> What have we learned that isn't in my textbook from the 90s?

Does it matter?

We can do countless things people in the 90's would have thought were black magic.

If I showed the kid version of myself what I can do with Opus or Nano Banana or Seedance, let alone broadband and smartphones, I think I'd feel we were living in the Star Trek future. The fact that we can have "conversations" with AI is wild. That we can make movies and websites and games. It's incredible.

And there does not seem to be a limit yet.

charcircuit an hour ago | parent | prev [-]

I would agree with you if we were talking about trying to replicate some form of general intelligence, but we are talking about creating artificial intelligence.

_russross an hour ago | parent | prev | next [-]

Turing himself argued that trying to measure if a computer is intelligent is a fool's errand because it is so difficult to pin down definitions. He proposed what we call the "Turing test" as a knowable, measurable alternative. The first paragraph of his paper reads:

> I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

Many people who want to argue about AGI and its relation to the Turing test would do well to read Turing's own arguments.

redox99 35 minutes ago | parent [-]

The Turing test ended up being kind of a flop: we basically passed it and nobody cared. That's because the Turing test is about whether a machine can fool a human, not about its intellectual capabilities per se.

anthonyrstevens 19 minutes ago | parent [-]

No, it's because certain people moved the goalposts. Nothing an LLM does or will do will make them believe that it's "intelligent", because they have a mental model of "intelligence" that is more religious than empirical.

sn0wr8ven 2 hours ago | parent | prev | next [-]

I don't think the goalpost has been shifted for AGI, or for the definition of AGI used by these corporations. It was always a model or system that surpasses human capabilities at most tasks, one able to replace a human worker. The big companies just broke that down into AGI stage 1, stage 2, etc. so they could claim to have achieved it.

The Turing Test/Imitation Game is not a good benchmark for AGI; it is a linguistics test only. Many chatbots could pass the Turing Test to a certain degree even before LLMs.

Regardless, the goalpost hasn't shifted. Replacing the human workforce is the ultimate end goal; that's why there are investors. The investors are not pouring billions into passing the Turing Test.

turtlesdown11 an hour ago | parent [-]

AGI moved from a technical goal to a marketing term

zug_zug an hour ago | parent | prev | next [-]

I don't think so... I think most of the sci-fi I grew up reading presented AGI that could reason better than humans could, like make a plan and carry it out.

Do people not know what the word "general" means? It means not limited to any subset of capabilities, so an AGI could teach itself to do anything that can be learned, like starting a business. AI today can't really learn from its experiences at all.

Zambyte an hour ago | parent | prev | next [-]

Related: https://en.wikipedia.org/wiki/AI_effect

The truth is, we have had AGI for years now. We even have artificial superintelligence: software systems that are more intelligent than any human. Some humans might have an extremely narrow subject in which they are more intelligent than any AI system, but the people on that list are vanishingly small in number.

AI hasn't met sci-fi expectations, and that's a marketing opportunity. That's all it is.

baq an hour ago | parent | next [-]

AGI in the common man's world model is ASI in the AI researcher's definitions, i.e. something obviously smarter at anything and everything you could ask of it, regardless of how good an expert you are in any domain.

also, I'm pretty sure some people will move the goalposts further even then.

fragmede 29 minutes ago | parent | prev [-]

Hasn't met your sci-fi expectations, maybe. I pull a computer out of my pocket and talk with it. Sure, it gets tripped up here and there, but take a step back: holy shit, that's freaking amazing! I don't have a flying car or transparent aluminum, and society has its share of issues right now, but my car drives itself. Coming from the 90's, I think I'm living in the sci-fi future! (Only question is, which one.)

pron an hour ago | parent | prev | next [-]

The Turing test pits a human against a machine, each trying to convince a human questioner that the other is the machine. If the machine knows how humans generally behave, for a proper test, the human contestant should know how the machine behaves. I think that this YouTube channel clearly shows that none of today's models pass the Turing test: https://www.youtube.com/@FatherPhi
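The setup described here can be sketched as a blinded trial: a judge sees two labelled transcripts, one from a human and one from a machine, and must name the machine; the machine "passes" only if the judge does no better than chance. A minimal toy harness (all the responder and judge functions below are hypothetical stand-ins, not a real benchmark):

```python
import random

def human_responder(question: str) -> str:
    # Hypothetical human contestant.
    return "Hmm, four, I think?"

def machine_responder(question: str) -> str:
    # Hypothetical machine contestant with an obvious stylistic tell.
    return "4."

def run_trial(judge, questions, rng) -> bool:
    # Randomly assign the human and the machine to labels A and B so the
    # judge cannot rely on position, then ask the judge to name the machine.
    responders = [human_responder, machine_responder]
    rng.shuffle(responders)
    transcripts = {
        label: [(q, responder(q)) for q in questions]
        for label, responder in zip("AB", responders)
    }
    guess = judge(transcripts)  # judge returns "A" or "B"
    truth = "A" if responders[0] is machine_responder else "B"
    return guess == truth

def passes_turing_test(judge, questions, trials=1000, seed=0) -> bool:
    # The machine "passes" if the judge cannot identify it much better
    # than chance over many randomized trials.
    rng = random.Random(seed)
    correct = sum(run_trial(judge, questions, rng) for _ in range(trials))
    return abs(correct / trials - 0.5) < 0.05

def tell_judge(transcripts) -> str:
    # A judge who knows the machine's tell: the terse "4." answer.
    return "A" if transcripts["A"][0][1] == "4." else "B"
```

Here `tell_judge` identifies the machine every time, so `passes_turing_test` returns `False`, which illustrates the asymmetry above: a judge (or human contestant) who knows what machine behaviour looks like is far harder to fool than one who doesn't.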

lesuorac an hour ago | parent | prev | next [-]

> Maybe we need to think less about building tests for definitively calling an LLM AGI, and instead decide that AGI is here once we can no longer tell that humans aren't LLMs.

If you've never read the original paper [1], I recommend that you do. We're long past the point where some humans can't tell whether X was done by man or machine.

[1]: https://courses.cs.umbc.edu/471/papers/turing.pdf

applfanboysbgon 32 minutes ago | parent | prev | next [-]

People thought Eliza was alive too in the 60s. AGI is not determined by how ignorant, uninformed humans view a technology they don't understand. That is the single dumbest criterion you could come up with for defining it.

Regarding shifting goalposts, you are suggesting the goalposts are being moved further away, but it's the exact opposite: the goalposts are being moved closer and closer. Someone from the 50s would have expected artificial intelligence to be something recognisable as essentially equivalent to human intelligence, just in a machine. Artificial intelligence in old sci-fi looked nothing like Claude Code. The definition has since been watered down again and again, so that anything and everything a computer does counts as artificial intelligence. We might as well call a calculator AGI at this point.

zendist 36 minutes ago | parent | prev | next [-]

The goalposts keep moving because LLM hypeists keep saying LLMs are "close" to AGI (or even that they already are). Any reasonably intelligent person who knows anything about LLMs obviously rejects those claims, but the rest of the world doesn't.

An AGI would not have problems reading an analog clock. Or rather, it would not have a problem realizing it had a problem reading it, and would try to learn how to do it.

An AGI is not whatever (sophisticated) statistical model is hot this week.

Just my take.

redox99 31 minutes ago | parent [-]

Vision is still much weaker than text for LLMs. So you could argue we already have AGI for text but not for vision inputs, or you could argue AGI requires being human-level at text, vision, and sound.

arkadiytehgraet an hour ago | parent | prev | next [-]

Sure, in the 80s, after interacting with CC once you would call it 'alive'. After interacting with it for 5-10 minutes you would clearly see that it is as far from AGI as something as mundane as a C compiler is.

ex-aws-dude 27 minutes ago | parent | prev | next [-]

Maybe moving the goalposts is how we find the definition?

andrepd an hour ago | parent | prev [-]

By that measure Eliza might pass the Turing test too. It just shows that the test is far from being a thought-terminating argument by itself.