xyzzy123 3 days ago

Am I the only one who feels that Claude Code is what they would have imagined basic AGI to be like 10 years ago?

It can plan and take actions towards arbitrary goals in a wide variety of mostly text-based domains. It can maintain basic "memory" in text files. It's not smart enough to work on a long time horizon yet, it's not embodied, and it has big gaps in understanding.
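
(As a trivial sketch of what that kind of file-based "memory" can amount to, assuming nothing about Claude Code's internals; the file name and helpers here are hypothetical:)

    # Hypothetical sketch: an agent's "memory" as a plain notes file that is
    # read back into context at session start and appended to as it works.
    from pathlib import Path

    MEMORY = Path("NOTES.md")  # hypothetical file name

    def recall() -> str:
        # Everything remembered so far, fed back into the model's context.
        return MEMORY.read_text() if MEMORY.exists() else ""

    def remember(note: str) -> None:
        # Append one durable fact for future sessions.
        with MEMORY.open("a") as f:
            f.write(f"- {note}\n")

    remember("Build uses pnpm, not npm.")
    print(recall())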

But this is basically what I would have expected v1 to look like.

kelnos 3 days ago | parent | next [-]

> Am I the only one who feels that Claude Code is what they would have imagined basic AGI to be like 10 years ago?

That wouldn't have occurred to me, to be honest. To me, AGI is Data from Star Trek. Or at the very least, Arnold Schwarzenegger's character from The Terminator.

I'm not sure I'd make sentience a hard requirement for AGI, but my general mental picture of AGI does include it.

Claude Code is amazing, but I would never mistake it for AGI.

buu700 3 days ago | parent | next [-]

I would categorize sentient AGI as artificial consciousness[1], but I don't see an obvious reason AGI inherently must be conscious or sentient. (In terms of near-term economic value, non-sentient AGI seems like a more useful invention.)

For me, AGI is an AI to which I could assign an arbitrarily complex project and which, given sufficient compute and permissions, would complete it as reliably as a competent C-suite human executive. For example, it could accept and execute instructions to acquire real estate matching certain requirements, request approvals from the purchasing and legal departments as needed, handle government communications and filings, construct a widget factory on the property using a fleet of robots, and operate that factory on an ongoing basis while ensuring reliable widget deliveries to distribution partners. Current agentic coding certainly feels like magic, but it's still not that.

1: https://en.wikipedia.org/wiki/Artificial_consciousness

ACCount37 2 days ago | parent [-]

"Consciousness" and "sentience" are terms mired in philosophical bullshit. We do not have an operational definition of either.

We have no agreement on what either term really means, and we definitely don't have a test that could be administered to conclusively confirm or rule out "consciousness" or "sentience" in something inhuman. We don't even know for sure if all humans are conscious.

What we really have is task-specific performance metrics. This generation of AIs is already in the valley between "average human" and "human expert" on many tasks. And the performance of frontier systems keeps improving.

amanaplanacanal 2 days ago | parent [-]

"Consciousness" seems pretty obvious. The ability to experience qualia. I do it, you do it, my dog does it. I suspect all mammals do it, and I suspect birds do too. There is no evidence any computer program does anything like it.

It's "intelligence" I can't define.

ACCount37 2 days ago | parent [-]

Oh, so simple. Go measure it then.

The definition of "featherless biped" might have more practical merit, because you can at least check for feathers and count limbs touching the ground in a mostly reliable fashion.

We have no way to "check for qualia" at all. For all we know, the ECU in a 2002 Toyota Hilux has it, but 10% of all humans don't.

amanaplanacanal 2 days ago | parent [-]

Plenty of things are real that can't be measured, including many physical sensations and emotions.

I won't say they can never be measured, but we currently have no idea how.

ACCount37 2 days ago | parent [-]

If you can't measure it and can't compare it, then for all practical purposes, it does not exist.

"Consciousness" might as well not be real. The only real and measurable thing is capabilities.

amanaplanacanal 2 days ago | parent [-]

Oof. Tell chronic pain patients that their pain doesn't exist.

I guess depression doesn't exist either. Or love.

adastra22 3 days ago | parent | prev [-]

I would love for you to define AGI in such a way as for that to make sense.

My starting presumption is that you actually mean ASI, and that's the charitable reading, assuming this isn't just pattern-matching to questionable sci-fi.

martinald 3 days ago | parent | prev | next [-]

Totally agree. It even (usually) picks up the subtle intent behind my often hastily written prompts to fix something.

What really occurs to me is that there is still so much that can be done to leverage LLMs with tooling. Small things in Claude Code (plan mode, for example) improve the system more, in my eyes, than the model update from Sonnet 3.5 to 4.0 did.

zdragnar 3 days ago | parent | prev | next [-]

Claude Code is neither sentient nor sapient.

I suspect most people envision AGI as at least having sentience. To borrow from Star Trek, the Enterprise's main computer is not at the level of AGI, but Data is.

The biggest thing that is missing (IMHO) is a discrete identity and notion of self. It'll readily assume a role given in a prompt, but lacks any permanence.

bitwize 2 days ago | parent | next [-]

The analogy I like to use is from the fictional universe of Mass Effect, which distinguishes between VI (Virtual Intelligence), a conversational interface over some database or information service (often fronted by a holographic avatar of a human, asari, or other sentient being), and AI, which is sentient and smart enough to be considered a person in its own right. We've just barely begun to construct VIs, and they're not particularly good or reliable ones.

One thing I like about the Mass Effect universe is the depiction of the geth, which qualify as AI. Each geth unit is run not by a single intelligent program but by a collection of thousands of daemons, each of which makes some small component of the robot's decisions on its own; together they add up to a collective consciousness. When you look at how actual modern robotics platforms (such as ROS) are designed, with many processes responsible for sensors and actuators communicating across a common bus, you can see the geth as a sort of extrapolation of that idea.
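
A minimal sketch of that pattern, assuming a standard ROS 1 setup with rospy (node and topic names are hypothetical): one small node owns a single decision and talks over the shared bus, while sibling nodes do the same for their own sensors and actuators.

    #!/usr/bin/env python
    # One small "daemon" in the many-processes-on-a-bus style described above:
    # it subscribes to a laser scan topic and publishes velocity commands,
    # making exactly one narrow decision on its own.
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    def on_scan(scan):
        # The node's single decision: creep forward if anything is close ahead.
        cmd = Twist()
        cmd.linear.x = 0.1 if min(scan.ranges) < 0.5 else 0.5
        cmd_pub.publish(cmd)

    rospy.init_node("obstacle_governor")           # one process among many
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/scan", LaserScan, on_scan)  # listen on the common bus
    rospy.spin()                                   # other nodes handle the rest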

atleastoptimal 3 days ago | parent | prev | next [-]

Any claim of sentience is neither provable nor falsifiable. Caring about its definition has nothing to do with capabilities.

furyofantares 3 days ago | parent | prev | next [-]

> I suspect most people envision AGI as at least having sentience

I certainly don't. It could be that it's necessary, but I don't know of any good arguments for (or against) it.

dataviz1000 3 days ago | parent | prev | next [-]

Student: How do I know I exist?

Philosophy Professor: Who is asking?

Student: I am!

kelseyfrog 3 days ago | parent | prev | next [-]

Mine is. What evidence would you accept to change your mind?

handfuloflight 3 days ago | parent | prev [-]

Why should it have discrete identity and notion of self?

adastra22 3 days ago | parent | prev | next [-]

No, you are not the only one. I am continuously mystified by the discussion surrounding this. Claude is absolutely and unquestionably an artificial general intelligence. But what people mean by "AGI" is a constantly shifting, never-defined goalpost moving at sonic speed.

amanaplanacanal 2 days ago | parent [-]

What we envisioned with AGI, I think, is something like self-directed learning. Not just a better search engine.

slaterbug 2 days ago | parent | next [-]

Whether or not it is AGI, it seems very reductive to classify something like Claude Code as "just a better search engine".

adastra22 2 days ago | parent | prev [-]

Isn’t that unsupervised learning during training or fine-tuning?

root_axis 3 days ago | parent | prev [-]

The "basic" qualifier is just equivocating away all the reasons why it isn't AGI.