omidsa1 2 days ago

I also quite like the way he puts it. However, from a certain point onward, the AI itself will contribute to the development, adding nines, and that's the key difference between the nines analogy as it applies to other systems (including earlier domain-specific ML ones) and the path to AGI. That's why we can expect fast acceleration to take off within two years.

breuleux 2 days ago | parent | next [-]

I don't think we can be confident that this is how it works. It may very well be that our level of intelligence puts a hard limit on how many nines we can add, and AGI just pushes that limit further without making the process faster per se.

It may also be that we're looking at this the wrong way altogether. If you compare the natural world with what humans have achieved, for instance, the two are qualitatively different; they have basically nothing to do with each other. Humanity isn't "adding nines" to what Nature was doing, we're just doing our own thing. Likewise, whatever "nines" AGI may be singularly good at adding may lie in directions orthogonal to everything we've been doing.

Progress doesn't really go forward. It goes sideways.

bamboozled a day ago | parent | next [-]

It also assumes that all advances in AI just lead to cold, hard gains. People have suggested this before, but would a sentient AI get caught up in philosophical, silly, or religious ideas? Silicon Valley investor types seem to hope it's all just curing diseases they can profit from, but it might also be "let's compose some music instead"?

Unit327 a day ago | parent [-]

AI doesn't have hopes and desires, or something it would rather be doing. It has a utility function that it will optimise for regardless of all else. This doesn't change when it gets smarter, or even when it reaches super-intelligence.
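That orthogonality claim can be made concrete with a toy sketch (purely illustrative, not a description of any real system): making the agent "smarter" here just means letting it search more candidates, which improves how well it optimises, not what it optimises for.

```python
# Toy agent with a fixed utility function. Increasing the search budget
# (a stand-in for "intelligence") changes the quality of the optimisation,
# never the goal being optimised.
def choose(actions, utility, search_budget):
    # The agent only evaluates as many candidates as its budget allows.
    considered = actions[:search_budget]
    return max(considered, key=utility)

utility = lambda a: -abs(a - 42)   # fixed goal: get as close to 42 as possible
actions = list(range(100))

weak = choose(actions, utility, search_budget=10)     # limited search
strong = choose(actions, utility, search_budget=100)  # "smarter" agent

print(weak, strong)  # 9 42 — better optimisation, identical goal
```

The smarter agent finds a strictly better action, but nothing about scaling the search touches the utility function itself.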

adventured 2 days ago | parent | prev | next [-]

Adding nines to nature is exactly what humans are doing. We are nature. We are part of the natural order.

Anything that exists is part of nature, there can be no exceptions.

If I go burn a forest down on purpose, that is in fact nature doing it. No different than if a dolphin kills another animal for fun or a chimp kills another chimp over a bit of territory. Insects are also every bit as 'vicious' in their conquests.

j45 2 days ago | parent | prev [-]

The intuition of someone who has put in a decade or two of wondering openly can't be discounted as easily as that of someone who might be a beginner to it.

AGI that encompasses all of humanity's knowledge in one source and beats every human on every front might be a decade away.

Individual agents with increasing agency, consistently covering more and more abilities? That seems like a steady path visible all the way to the horizon: just put one foot in front of the other.

For me, the grain of salt I'd take Karpathy with is much, much smaller than average, only because he shares how he thinks, examines his own understanding, and changes it.

His ability to explain complex things simply helps me learn and understand quicker, see whether I arrive at something similar or different, and avoid assuming anything is wrong, or right, before my own understanding is present.

rpcope1 2 days ago | parent | prev | next [-]

> However, from a certain point onward, the AI itself will contribute to the development—adding nines—and that’s the key difference between this analogy of nines in other systems (including earlier domain‑specific ML ones) and the path to AGI.

There's a massive planet-sized CITATION NEEDED here, otherwise that's weapons-grade copium.

aughtdev a day ago | parent | prev | next [-]

I doubt this. General intelligence will be a step change, not a gentle ramp. If we get to an architecture intelligent enough to meaningfully contribute to AI development, we'll have already made it; it'll simply be a matter of scale. There's no 99% AGI that can help build 100% AGI but for some reason can't drive a car, cook a meal, or work an office job.

Yoric 2 days ago | parent | prev | next [-]

It's a possibility, but far from certainty.

If you look at it differently, assembly language may have been one nine, compilers the next, and each successive generation of languages up to ${your favorite language} one more nine; and yet, none of them got us noticeably closer to AGI.

a day ago | parent | prev | next [-]
[deleted]
AnimalMuppet 2 days ago | parent | prev | next [-]

Isn't that one of the measures of when it becomes an AGI? So that doesn't help you with however many nines we are away from getting an AGI.

Even if you don't like that definition, you still have the question of how many nines we are away from having an AI that can contribute to its own development.

I don't think you know the answer to that. And therefore I think your "fast acceleration within two years" is unsupported, just wishful thinking. If you've got actual evidence, I would like to hear it.

ben_w 2 days ago | parent | next [-]

AI has been helping with the development of AI ever since at least the first optimising compiler or formal logic circuit verification program.

Machine learning has been helping with the development of machine learning ever since hyper-parameter optimisers became a thing.

Transformers have been helping with the development of transformer models since… I don't know exactly when, but it was before ChatGPT came out.

None of the initials in AGI are booleans.

But I do agree that:

> "fast acceleration within two years" is unsupported, just wishful thinking

Nobody has any strong evidence of how close "it" is, or even a really good shared model of what "it" even is.

scragz 2 days ago | parent | prev [-]

AGI is when it's general. A narrow AI trained only on coding and on training AIs could contribute to the acceleration without being AGI itself.

techblueberry a day ago | parent | prev [-]

I think the nines include this assumption.