ryanSrich 4 hours ago

AGI is here. 90%+ of white-collar work _can_ be done by an LLM; we are simply missing a tested orchestration layer. Speaking broadly about knowledge work, there is almost nothing that a human is better at than Opus 4.6. If you're a typical office worker whose job is done primarily on a computer, and if that's all AGI is, then yeah, it's here.

causal 3 hours ago | parent | next [-]

Opus is the very best, and I still throw away most of what it produces. If I did not carefully vet its work, I would degrade my code bases very quickly. To accurately measure the value of AI, you must include the negative in your sum.

ryanSrich 20 minutes ago | parent [-]

I would do, and have done, the same with Jr. devs. It's not an argument against it being AGI.

dimitri-vs 27 minutes ago | parent | prev | next [-]

Over the API, Opus 4.6 will tell you it's still 2025, admit it's wrong, then revert to being convinced it's 2025 as it nears its context limit.

I'll go so far as to say LLM agents are AGI-lite, but saying we "just need the orchestration layer" is like saying: OK, we have a couple of neurons, now we just need the rest of the human.

ryanSrich 21 minutes ago | parent [-]

Giving Opus a memory, or real-time access to the current year, is trivial. I don't see how that's an argument against it being AGI.
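
A rough sketch of what I mean, using the Anthropic Python SDK (untested, and the model ID is a placeholder for whatever Opus snapshot you're on): inject the real date into the system prompt on every call.

    # Sketch: hand the model the current date on every request.
    from datetime import datetime, timezone
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

    def ask(prompt: str) -> str:
        today = datetime.now(timezone.utc).date().isoformat()
        resp = client.messages.create(
            model="claude-opus-4-6",  # placeholder model ID
            max_tokens=1024,
            system=f"Today's date is {today}. Trust it over your training data.",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

    print(ask("What year is it?"))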

loloquwowndueo 3 hours ago | parent | prev | next [-]

> there is almost nothing that a human is better at than Opus 4.6.

Lolwut. I keep having to correct Claude at trivial code organization tasks. The code it writes is correct; it’s just ham-fisted and violates DRY in unholy ways.

And I’m not even a great coder…

ryanSrich 18 minutes ago | parent | next [-]

This is entirely solvable with skills, memory, context, and further prompting, all of which can be done in a way that's reliable and repeatable.

You wouldn't expect a Jr. dev to be the best at keeping things DRY either.
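
For example (a sketch; the file name and model ID are mine): keep the team's code-organization rules in a version-controlled file and prepend them to every request, so the guidance is repeatable instead of ad hoc.

    # Sketch: same conventions on every call -- repeatable by construction.
    from pathlib import Path
    import anthropic

    # e.g. "Keep it DRY. Small single-purpose functions. Match existing layout."
    CONVENTIONS = Path("conventions.md").read_text()

    client = anthropic.Anthropic()

    def code_task(task: str) -> str:
        resp = client.messages.create(
            model="claude-opus-4-6",  # placeholder model ID
            max_tokens=2048,
            system=CONVENTIONS,
            messages=[{"role": "user", "content": task}],
        )
        return resp.content[0].text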

causal 3 hours ago | parent | prev | next [-]

> violates DRY in unholy ways

Well said

danenania 3 hours ago | parent | prev [-]

I’m very pro AI coding and use it all day long, but I also wouldn’t say “the code it writes is correct”. It will produce all kinds of bugs, vulnerabilities, performance problems, memory leaks, etc. unless carefully guided.

ryanSrich 18 minutes ago | parent [-]

So it's even more human than we thought

JSDave 3 hours ago | parent | prev | next [-]

AGI is when it can do all intellectual work that humans can do. At that point it can improve its own intelligence and create a feedback loop, because it is as smart as the humans who created it.

ryanSrich 17 minutes ago | parent | next [-]

This has always been my personal definition of AGI. But the market and the industry don't agree, so I've backed off on that and have more or less settled on "can do most of the knowledge work that a human can do".

pixl97 3 hours ago | parent | prev | next [-]

No, that is ASI. No single human can do all intellectual work themselves; you have millions of different human models, based on roughly the same architecture, to do that.

When you have a single model that can do all you require, you are looking at something that can run billions of copies of itself and cause an intelligence explosion or an apocalypse.

JSDave 2 hours ago | parent [-]

"Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks."

9x39 3 hours ago | parent | prev [-]

Why the super-high bar? What's unsatisfying is this: aren't even the 'dumbest' humans still a general intelligence, and one we're nearly past, depending on how you squint and measure?

It feels like an arbitrary bar, perhaps meant to make sure we aren't putting AIs above humans, even though they are most certainly superhuman on a rapidly growing number of tasks.

lysace 3 hours ago | parent | prev [-]

That "simple orchestration layer" (paraphrased) is what I consider the AGI.

But yeah, I suspect LLMs may actually get close enough. "Just" add more reasoning loops and corresponding compute.

It is objectively, grotesquely wasteful (a human brain operates on 12 to 25 watts and would vastly outperform something like that in efficiency), but it would still be cataclysmic.

/layperson, in case that wasn't obvious

pixl97 3 hours ago | parent | next [-]

If we can get AI down to this power requirement, then it's over for humans. Just think of how many copies of itself, each thinking at the level of the smartest humans, it could run at once. Also think of where all that hardware could hide itself and keep itself powered around the world.

jonas21 3 hours ago | parent | prev | next [-]

> a human brain operates on 12 to 25 watts

Yeah, but a human brain without the human attached to it is pretty useless. In the US, it averages out to around 2 kW per person for residential energy usage, or 9 kW if you include transportation and other primary energy usage too.
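
Back-of-the-envelope, with round numbers of my own (roughly 21 quads/yr of US residential primary energy, ~100 quads/yr total, ~333M people), which lands close to those 2 kW and 9 kW figures:

    # Sanity check on the per-person power figures (my round numbers):
    QUAD_J = 1.055e18   # one quad = 10^15 BTU, in joules
    YEAR_S = 3.154e7    # seconds per year
    PEOPLE = 333e6      # rough US population

    residential_kw = 21 * QUAD_J / YEAR_S / PEOPLE / 1e3   # ~2.1 kW
    total_kw = 100 * QUAD_J / YEAR_S / PEOPLE / 1e3        # ~10 kW
    print(f"~{residential_kw:.1f} kW residential, ~{total_kw:.1f} kW total")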

lysace 3 hours ago | parent [-]

Fair.

Maybe The Matrix (1999), with its human battery farms, was on to something. :)

ryanSrich 3 hours ago | parent | prev [-]

I think "tested" is the hard part. The simple part seems to be there already, loops, crons, and computer use is getting pretty close.