kypro 3 hours ago

It seems to me by most classical definitions we've basically already reached AGI.

If I were to show Gemini 3 Pro to anyone in tech 10 years ago, they would probably say Gemini 3 is an AGI, even if they acknowledged there were some limitations.

The definition has moved so much that I'm not convinced people will say we've finally reached AGI even if we see further breakthroughs over the next 10 years, because even at that point there will probably still be 0.5% of tasks it struggles to compete with humans on. And we're going to have similar endless debates about ASI and the consciousness of AI.

I think all that really matters is the utility of AI systems broadly within society. While a self-driving car may not be an AGI, it will displace jobs and fundamentally change society.

The achievement of some technical definition of AGI, on the other hand, is probably not all that relevant. Even if the goalposts stop moving from today and advancements are made such that we finally get 51% of experts agreeing that AGI has been reached, there could still be 49% of experts who argue that it hasn't. By contrast, no one will be confused about whether their job has been replaced by an AI system.

I'm sorry - I know this is a bit of a meta comment. I do broadly agree with the article. I just struggle to see why anyone cares unless hitting that 51/49% threshold in opinion on AGI correlates to something tangible.

Shoetp an hour ago

LLMs do NOT have intelligence. Achieving AGI would mean solving self-driving cars, replacing programmers, scientists, etc., all things LLMs are currently unable to do in a way that actually replaces humans.

There's a huge gap between what Gemini 3 can do and what AGI promises to do. It's not just a minor "technical definition".