Closi 19 hours ago

Just comes down to your own view of what AGI is, as it's not particularly well defined.

While a bit 'time-machiney' - I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. If someone wrote a definition of AGI 20 years ago, we would probably have met that.

We have certainly blasted past some science-fiction examples of AI like Agnes from The Twilight Zone, which 20 years ago looked a bit silly, and now looks like a remarkable prediction of LLMs.

By today's definition of AGI we haven't met it yet, but eventually it comes down to 'I know it if I see it' - the problem with this definition is that it is polluted by what people have already seen.

nottorp 17 hours ago | parent | next [-]

> most people would probably say AGI has been achieved

Most people who took a look at a carefully crafted demo. I.e. the CEOs who keep pouring money down this hole.

If you actually use it you'll realize it's a tool, and not a particularly dependable tool unless you want to code what amounts to the React tutorial.

lcnmrn 11 hours ago | parent | next [-]

I built a Nostr web client with Gemini CLI, without looking at the code or touching an IDE: https://github.com/lucianmarin/subnostr
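
For the curious, the core of the protocol a client like this speaks is small - essentially NIP-01 subscriptions over a WebSocket. A rough sketch of that exchange, in Python purely for illustration (the relay URL is just an example, not anything the repo uses):

  import asyncio
  import json
  import websockets

  async def fetch_notes(relay_url: str, limit: int = 10) -> list[dict]:
      """Subscribe to a Nostr relay and collect recent text notes (kind 1)."""
      events = []
      async with websockets.connect(relay_url) as ws:
          # NIP-01 subscription: ["REQ", <subscription id>, <filter>]
          await ws.send(json.dumps(["REQ", "demo-sub", {"kinds": [1], "limit": limit}]))
          while True:
              msg = json.loads(await ws.recv())
              if msg[0] == "EVENT":
                  events.append(msg[2])     # the event object itself
              elif msg[0] == "EOSE":        # end of stored events
                  break
      return events

  if __name__ == "__main__":
      # Any public relay works here; this one is only an example.
      notes = asyncio.run(fetch_notes("wss://relay.damus.io"))
      for note in notes:
          print(note.get("pubkey", "")[:8], note.get("content", "")[:60])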

nottorp 9 hours ago | parent [-]

So it had a tutorial for that API and it reimplemented it.

bebb 16 hours ago | parent | prev [-]

Depending on the task, the tool can, in effect, demonstrate more intelligence than most people.

We've just become accustomed to it now, and tend to focus more on the flaws than the progress.

bananaflag 19 hours ago | parent | prev | next [-]

> If someone wrote a definition of AGI 20 years ago, we would probably have met that.

No, as long as people can do work that a robot cannot do, we don't have AGI. That was always, if not the definition, at least implied by the definition.

I don't know why the meme of AGI being not well defined has had such success over the past few years.

bonplan23 18 hours ago | parent | next [-]

"Someone" literally did that (+/- 2 years): https://link.springer.com/book/10.1007/978-3-540-68677-4

I think it was supposed to be a more useful term than the earlier and more common "Strong AI". With regard to strong AI, there was a widely accepted definition - passing the Turing Test - and we are way past that point already (see https://arxiv.org/pdf/2503.23674).

erfgh 13 hours ago | parent [-]

I have to challenge the paper authors' understanding of the Turing test. For an AI system to pass the Turing test, its output needs to be indistinguishable from a human's. In other words, the rate at which the AI system is picked as the human should equal the rate at which the human is picked. If in an experiment the AI system is picked at a rate higher than 50%, it does not pass the Turing test (contrary to what the authors seem to believe), because another human can use this knowledge to conclude that the system being picked is not really human.

Also, I would go one step further and claim that to pass the Turing test an AI system should be indistinguishable from a human when judged by people trained in making such a distinction. I doubt that they used such people in the experiment.

I doubt that any AI system available today, or in the foreseeable future, can pass the test as I qualify it above.
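
To make the arithmetic concrete: any pick rate other than 50% is itself a signal an informed judge can exploit. A toy simulation, with a made-up 73% figure that is not taken from the paper:

  import random

  random.seed(0)

  # Hypothetical pick rate (not a figure from the paper): suppose naive judges,
  # shown a human and an AI side by side, point at the AI as "the human" 73%
  # of the time.
  P_AI_PICKED_AS_HUMAN = 0.73
  TRIALS = 100_000

  correct = 0
  for _ in range(TRIALS):
      # With this probability, the naive judge's "more human" pick is the AI.
      naive_pick_is_ai = random.random() < P_AI_PICKED_AS_HUMAN
      # An informed judge who knows the statistic labels the *more convincing*
      # witness as the machine, i.e. reverses the naive verdict.
      if naive_pick_is_ai:
          correct += 1

  print(f"Informed judge identifies the AI in {correct / TRIALS:.1%} of trials")
  # Prints roughly 73% - well above chance, so the AI is still distinguishable.
  # Only a pick rate of exactly 50% leaves the informed judge at chance.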

CamperBob2 12 hours ago | parent [-]

People are constantly being fooled by bots in forums like Reddit and this one. That's good enough for me to consider the Turing test passed.

It also makes me consider it an inadequate test to begin with, since all classes of humans including domain experts can be fooled and have been in the past. The Turing test has always said more about the human participants than the machine.

Closi 19 hours ago | parent | prev [-]

Completely disagree - your definition (in my opinion) is more aligned with the concept of Artificial Super Intelligence.

Surely the 'General Intelligence' definition has to be consistent between 'Artificial General Intelligence' and 'Human General Intelligence', and humans can be generally intelligent even if they can't solve calculus equations or protein folding problems. My definition of general intelligence is much lower than most people's - I think a dog is probably generally intelligent, although obviously in a different way (dogs are better at learning how to run and catch a ball, and worse at programming in Python).

fc417fc802 17 hours ago | parent [-]

I do consider dogs to have "general intelligence"; however, despite that, I have always (my entire life) considered AGI to imply human-level intelligence. Not better, not worse, just human level.

It gets worse though. While one could claim that scoring equivalently on some benchmark indicates performance at the same level - and I'd likely agree - that's not what I take AGI to mean. Rather, I take it to mean "equivalent to a human", so if it utterly fails at something we're good at, such as driving a car through a construction zone during rush hour, then I don't consider it to have met the bar of AGI, even if it meets or exceeds us at other unrelated tasks. You have to be at least as general as a stock human to qualify as AGI in my books.

Now I may be but a single datapoint but I think there are a lot of people out there who feel similarly. You can see this a lot in popular culture with AGI (or often AI) being used to refer to autonomous humanoid robots portrayed as operating at or above a human level.

Related to all that, since you mention protein folding: I consider that to be a form of super intelligence, as it is more or less inconceivable that an unaided human would ever be able to accomplish such a feat. So I consider AlphaFold to be both super intelligent and decidedly _not_ AGI. Make of that what you will.

docjay 11 hours ago | parent | next [-]

Pop culture has spent its entire existence conflating AGI and ‘Physical AI’, so much so that the collective realization that they’re entirely different is a relatively recent thing. Both of them were so far off in the future that the distinction wasn’t worth considering, until suddenly one of them is kinda maybe sorta roughly here now…ish.

Artificial General Intelligence says nothing about physical ability, but movies with the ‘intelligence’ part typically match it with equally futuristic biomechanics to make the movie more interesting. AGI = Skynet, Physical AI = Terminator. The latter will likely be the hardest part, not only because it requires the former first, but because you can’t just throw more watts at a stepper motor and get a ballet dancer.

That said, I’m confident that if I could throw zero-noise, precise “human sensory”-level sensor data at any of the top LLM models, and their output was equally coupled to a human arm with the same sensory feedback, it would definitely outdo any current self-driving car implementation. The physical connection is the issue, and will be for a long time.

fc417fc802 10 hours ago | parent [-]

Agreed about the conflation. But that drives home that there isn't some historic, commonly and widely accepted definition of AGI whose goalposts are being moved. What there was doesn't match the new developments and was also often quite flawed to begin with.

> LLM models, ... outdo any current self-driving car

How would an LLM handle computer vision? Are you implicitly including a second embedding model there? But I think that's still the wrong sort of vision data for precise control, at least in general.
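
(For concreteness, "a second embedding model" would typically mean LLaVA-style wiring: a separate vision encoder whose patch embeddings get projected into the language model's token space. A toy sketch with made-up dimensions - none of this is a real model:)

  import torch
  import torch.nn as nn

  class TinyVisionEncoder(nn.Module):
      """Stand-in for a pretrained image encoder such as a ViT."""
      def __init__(self, embed_dim: int = 256):
          super().__init__()
          # 224x224 input carved into a 4x4 grid of patch embeddings
          self.proj = nn.Conv2d(3, embed_dim, kernel_size=56, stride=56)

      def forward(self, images: torch.Tensor) -> torch.Tensor:
          x = self.proj(images)                # (B, D, 4, 4)
          return x.flatten(2).transpose(1, 2)  # (B, 16, D) patch embeddings

  class VisionToLLMAdapter(nn.Module):
      """Projects patch embeddings into the language model's token space."""
      def __init__(self, vision_dim: int = 256, llm_dim: int = 512):
          super().__init__()
          self.proj = nn.Linear(vision_dim, llm_dim)

      def forward(self, patches: torch.Tensor) -> torch.Tensor:
          return self.proj(patches)            # (B, 16, llm_dim)

  # Toy forward pass: image patches become "soft tokens" that are simply
  # concatenated with the text token embeddings before the LLM runs.
  images = torch.randn(1, 3, 224, 224)
  text_tokens = torch.randn(1, 8, 512)         # stand-in for embedded prompt tokens
  vision_tokens = VisionToLLMAdapter()(TinyVisionEncoder()(images))
  llm_input = torch.cat([vision_tokens, text_tokens], dim=1)
  print(llm_input.shape)                       # torch.Size([1, 24, 512])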

How do you propose to handle the model hallucinating? What about losing its train of thought?

docjay an hour ago | parent [-]

True that there isn’t a firm definition for AGI, but that’s the fault of the “I”. We don’t have an objective definition of intelligence, and so we don’t have a means of measuring it either. I mean, odds are you’re the least intelligent paleoethnobotanist and cetacean bioacoustician I’ve ever met, but perhaps the most intelligent something_else. How do we measure that? How do we define it?

I was unclear in my previous message. Right now it would be terrible at driving a car, but I was saying that has more to do with the physical interface (cameras, sensors, etc.) than with the ability of an LLM. The ‘intelligence’ part is better than the PyTorch image recognition attached to a servo they’re using now; how to attach that ‘intelligence’ to the physical world is the 50-year task. (To be clear: LLMs aren’t intelligent, smart, or any sense of the word, and never will be. But they can sure replicate the effect better than current self-driving tech.)

Closi 10 hours ago | parent | prev [-]

I think your definition of it being 'human level' is sensible - definitely a lower bar to hit than 'as long as people can do work that a robot cannot do, we don't have AGI'.

There is certainly a lot of road between current technology and driving a car through a construction zone during rush hour, particularly with the same amount of driving practice a human gets.

Personally I think there could be an AGI which couldn't drive a car but has genuine sentience - an awareness of being alive, although not necessarily the exact human experience. Maybe this isn't AGI, which implies problem-solving and thinking more than sentience, but my gut says that if we got something sentient that couldn't drive a car, we would still be there, if that makes sense.

fc417fc802 10 hours ago | parent [-]

In theory I see what you're saying. There are physical things an octopus could conceivably do that I never could on account of our physiology rather than our intelligence. So you can contrive an analogous scenario involving only the mind where something that is clearly an AGI is incapable of some specific task and thus falls short of my definition. This makes it clear that my definition is a heuristic rather than rigorous.

Nonetheless, it's difficult to imagine a scenario where something that is genuinely human level can't adapt in the field to a novel task such as driving a car. That sort of broad adaptability is exactly what the "general" in AGI is attempting to capture (imo).

Closi 6 hours ago | parent [-]

This is true, although maybe if an “AGI” invented us, it might say “It’s strange how these humans are so good at driving, but so bad at protein folding and playing Go”

Very abstract, but I think it’s important to remember that human intelligence also has jagged edges.

andy99 14 hours ago | parent | prev | next [-]

  I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. 
I’ve got to disagree with this. All past pop-culture AI was sentient and self-motivated; it was human-like in that it had its own goals and autonomy.

Current AI is a transcript generator. It can do smart stuff but it has no goals, it just responds with text when you prompt it. It feels like magic, even compared to 4-5 years ago, but it doesn’t feel like what was classically understood as AI, certainly by the public.

Somewhere marketers changed AGI to mean “does predefined tasks with human level accuracy” or the like. This is more like the definition of a good function approximator (how appropriate) instead of what people think (or thought) about when considering intelligence.

docjay 10 hours ago | parent | next [-]

The thing that blows my mind about language models isn't that they do what they do, it's that it's indistinguishable from what we do. We are a black box; nobody knows how we do what we do, or if we even do what we do because of a decision we made. But the funny thing is: if I can perfectly replicate a black box then you cannot say that what I'm doing isn't exactly what the black box is doing as well.

We can't measure goals, autonomy, or consciousness. We don't even have an objective measure of intelligence. Instead, since you probably look like me I think it's polite to assume you're conscious…that's about it. There’s literally no other measure. I mean, if I wanted to be a jerk, I could ask if you're conscious, but whether you say yes or no is proof enough that you are. If I'm curious about intelligence I can come up with a few dozen questions, out of a possible infinite number, and if you get those right I'll call you intelligent too. But if you get them wrong… well, I'll just give you a different set of questions; maybe accounting is more your thing than physics.

So, do you just respond with text when you’re prompted with input from your eyes or ears? You’ll instinctively say “No, I’m conscious and make my own decisions”, but that’s just a sequence of tokens with a high probability in response to that question.

Do you actually have goals, or did the system prompt of life tell you that in your culture, at this point in time, you should strive to achieve goals because that’s what gets positive feedback?

andy99 8 hours ago | parent [-]

Your argument makes no sense

docjay 5 hours ago | parent [-]

Well then keep working on it.

nextaccountic 13 hours ago | parent | prev [-]

> Current AI is a transcript generator. It can do smart stuff but it has no goals

That's probably not because of an inherent lack of capability, but because the companies that run AI products don't want to run autonomous intelligent systems like that

sixtyj 17 hours ago | parent | prev [-]

Charles Stross published Accelerando in 2005.

The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity.