K0balt a day ago

We will never achieve AGI, because we keep moving the goalposts.

SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

Humans also produce nonsensical, useless output. Lots of it.

Yes, LLMs have many limitations that humans easily transcend.

But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.

Relatively few (probably less than half) are casually capable of the level of reasoning that LLMs exhibit.

And, more importantly, as anyone who was in the field when neural networks were new is aware, AGI never meant human-level intelligence until the LLM age. It just meant that a system could generalize to one domain from knowledge gained in other domains, without supervision or programming.

rlanday 3 hours ago | parent | next [-]

> SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

So why are so many people still employed as e.g. software engineers? People aren’t prompting the models correctly? They’re only asking 10 times instead of 20? They’re holding it wrong?

K0balt an hour ago | parent [-]

Long-form engineering tasks aren’t doable yet without supervision. But I can say that in our shop, we won’t be hiring any more junior devs, ever, except as interns (free, in my region) or because of some extraordinary capabilities, insights, or skills. There just isn’t any business case for hiring junior devs to do the grunt work anymore.

But the vast majority of work done in the world is nowhere near the order of magnitude of complexity or rigor required by long-form engineering.

While models may not outperform an experienced developer, they will likely outperform her junior assistant, and a dev using AI effectively will almost certainly outperform a team of three without AI, in most cases.

The salient fact here is not that the human is outperformed by the model in a narrow field of extraordinary capability, but rather that the model can outperform that dev in 100 other disciplines, and outperform most people in almost any cerebral task.

My claim is not that models outperform people in all tasks, but that models outperform all people at many tasks, and I think that holds true with some caveats, especially when you factor in speed and scale.

naveen99 10 minutes ago | parent [-]

What does junior or senior have to do with it? I would think a smarter junior will run circles around a dumber senior engineer with LLM autocomplete.

alganet a day ago | parent | prev [-]

> We will never achieve AGI, because we keep moving the goalposts.

I think it's fair to move the goalposts on the idea of AGI.

Moving the goalposts is often seen as a bad thing (like shifting arguments around). However, in a more general sense, it's our special human sauce. We get better at stuff, then raise the bar. I don't see a reason why we should give LLMs a break if we can be more demanding of them.

> SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

Performance should include energy consumption. Humans are incredibly efficient at being smart while demanding very little energy.

> But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.

What if we could? What if education mostly stopped improving in 1820 and we're still learning physics at school by doing exercises about train collisions and clock pendulums?

K0balt 16 hours ago | parent [-]

I’m with you on the energy and limitations, and even on the moving of goalposts.

I’d like to add, though, that I think the current definition of AGI has jumped the shark and is already at ASI, since we expect our machine to exhibit professional-level acumen across such a wide range of knowledge that it would rank with the top 0.01 percent of career scholars and engineers, or even exceed any known human capacity just due to breadth of knowledge. And we also expect it to provide that level of focused interaction to a small city of people all at the same time, and to provide that knowledge 10,000 times faster than any human can.

I think, definitionally, that is ASI.

But I also think the AGI that “we are still chasing” focus-groups a lot better than ASI, which is legitimately scary as shit to the average Joe, and which seasoned engineers recognize as a significant threat if controlled by people with misaligned intentions.

PR needs us to be “approaching AGI”, not “closing in on ASI”, or we would be pinned down with prohibitive regulatory straitjackets in no time.

alganet 16 hours ago | parent [-]

As many regulatory measures as possible seems good. These things are not toys.

K0balt 16 hours ago | parent [-]

Yeah, it’s definitely some kind of new chapter. It’s reducing hiring, and will drive unemployment, no matter what people are saying. It’s a poison pill in a way, since no one will hire junior staff anymore. The reliance on AI will skyrocket as experienced staff ages out and there are no replacements coming up through the ranks.

alganet 16 hours ago | parent [-]

https://en.wikipedia.org/wiki/Chewbacca_defense

K0balt 7 hours ago | parent [-]

I may have missed the target of this reference, but I enjoyed it nonetheless. The Chewbacca defense definitely seems to have gained traction over the last decade.