glitchc 6 days ago

It doesn't need to be AGI to build complex software. A human software developer can build a complex software system and also perform other complex tasks with the same body (play an instrument, fly an aircraft, etc.). Doing all of that with the same resources is what AGI is needed for. For just software, I'm sure an LLM can eventually become an expert, just like it learnt how to play Go.

osigurdson 6 days ago | parent [-]

AGI usually means "equivalent to human" while digital super intelligence generally means "smarter than all humans put together". In any case, I agree that once we reach "equivalent to human", it can naturally do anything we do. That should be enough to end office jobs imo.

1718627440 5 days ago | parent [-]

A machine that is capable of performing human intelligence in every paradigm according to a mathematical model, and that is scalable by increasing its frequency or power or by duplicating it (because it is reproducible), is both "equivalent to human" and "smarter than all humans put together". Once humans are capable of producing this, it will be capable of improving and optimizing itself up to the limit of information density. The only limit will be money as a proxy for available resources.

bccdee 5 days ago | parent | next [-]

> scalable by increasing the frequency, power or duplicating it

Well, there's your problem. Very few things scale like that. Two people are not twice as smart as one person, nor are two instances of ChatGPT twice as smart as one. One instance of ChatGPT running twice as fast isn't significantly smarter, and in fact, ChatGPT can never outrun its own hallucinations no matter how fast you overclock it.

Intelligence is the most complex phenomenon in the universe. Why would it ever scale geometrically with anything?

> When humans were capable of producing this, then this will be capable of improving itself and optimizing until the limit of information density.

This doesn't follow. After all, humans are as smart as humans, and we can't really optimize ourselves beyond a superficial level (good nutrition, education, etc). Increasingly, AI is a black box. Assuming we do create a machine as smart as we are, why would it understand itself any better than we understand ourselves?

And why wouldn't we hit some sort of technical roadblock at (arbitrarily) 1.5x human intelligence? Why do we assume that every problem becomes tractable once a computer is solving it? Imagine we applied this reasoning to cars: over the course of a century, cars went from 10 km/h to 100 km/h to 500 km/h to (in special vehicles) 1000 km/h. Can we expect to see a 5000 km/h car within the next century? No, that's unlikely; at such high speeds, you begin to hit intractable technical limits. Why should scaling intelligence just be smooth sailing forever?

1718627440 5 days ago | parent [-]

> Very few things scale like that.

I wasn't talking about using two instances to scale smartness; I meant applying two instances to different problems. That very much scales.

> This doesn't follow. After all, humans are as smart as humans ...

In the hypothetical case that humans are capable of producing the one true AI system (real AI, or AGI, or whatever it's called now that marketing has taken the previous term), this system is by definition capable of producing another such system. Humans are capable of following Moore's law, so this system will be as well. This chain of systems will therefore explore the set of all possible intelligent systems, restricted only by resources. It isn't bound by inner problems like "(good nutrition, education, etc)", because it is a mathematical model; its physical representation only matters insofar as it needs to exist in this hypothetical case.

> AI is a black box

In this case, the black box "humans" was able to produce another thing that reproduces their intelligence. So we would have understood ourselves better than we currently do.

Note that every intelligent system can be completely simulated by a large enough non-intelligent statistical system, so intelligence isn't inferable from a set of inputs -> outputs. It's really the same as with consciousness.
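A toy sketch of what I mean (Python; the function, the inputs, and the names are made-up stand-ins, nothing rigorous):

    # A plain lookup table reproduces a system's input -> output behaviour
    # over any finite set of observed inputs, without doing any "thinking".
    def intelligent_system(question: str) -> str:
        # Stand-in for whatever process actually reasons about the answer.
        return f"considered answer to {question!r}"

    observed_inputs = ["2+2?", "capital of France?", "is P=NP?"]
    lookup_table = {q: intelligent_system(q) for q in observed_inputs}

    def non_intelligent_simulation(question: str) -> str:
        # No reasoning here: it only replays stored outputs.
        return lookup_table[question]

    # On every input we ever test, the two are indistinguishable.
    assert all(non_intelligent_simulation(q) == intelligent_system(q)
               for q in observed_inputs)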

> And why wouldn't we hit some sort of technical roadblock? Can we expect to see a 5000 km/h car?

Yes. We are capable of accelerating "objects" to 0.99..c. It's not impossible for us to accelerate a "car" to nearly light speed; we "just" need enough energy (meaning matter as energy).

> technical roadblock at (arbitrarily) 1.5x human intelligence

I wrote "until the limit of information density". Whatever this may be.

I intended to point out why a system "equivalent to human" is actually equivalent to "digital super intelligence", meaning "smarter than all humans put together".

---

You don't need to tell me you don't think this system will exist. I think this will end the same way as the attempts to build a machine that creates energy. My personal understanding is this: a system (humans) can never completely "understand" itself, because its "information size" is as large as itself, but to contain something, it needs to be larger than that thing. In addition, that "understanding" needs to be included in its "information size" as well, so the size needed to understand has then at least doubled. This means that the largest system capable of "understanding" itself has size 0.
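Very roughly, the shape of the argument (just a sketch; "information size" is of course not rigorously defined here):

    Let |S| be the information size of a system S, and U(S) a complete
    "understanding" of S.

    Premise 1: to contain something you must be larger than it, so if S
    holds U(S), then |S| > |U(S)|.
    Premise 2: a complete understanding of S can't be smaller than S, so
    |U(S)| >= |S|.

    Together: |S| > |U(S)| >= |S|, which is impossible for any |S| > 0.
    Only a system of size 0 could "understand" itself.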

In other words, understanding something means knowing the whole thing and abstracting to a higher level than the abstractness of the system to be understood. But when the system tries to understand itself, it keeps looking for yet another, higher abstraction, to infinity, as each abstraction it finds is never enough.

This idea comes from the fact that you can't prove that every implementation of a mathematical model has some behaviour without formalizing every possible model, in other words inventing another, higher model, in other words abstracting.

bccdee 4 days ago | parent [-]

> I meant applying two instances to different problems. That very much scales.

You can't double the speed at which you solve a problem by splitting it in two and assigning one person to each half. Fred Brooks wrote a whole book (The Mythical Man-Month) about how this doesn't scale: with n people on one problem you get n(n-1)/2 communication channels, so coordination overhead grows faster than capacity does.

> this system is capable of producing another system by definition

Yeah, humans can produce other humans too. We're talking about whether that system can produce an improved system, which isn't necessarily true. The design could easily be a local maximum with no room for improvement.

> Humans are capable of following Moore's law

Not indefinitely. Technical limitations eventually cause us to hit a point of diminishing returns. Technological progress follows a sigmoid curve, not an exponential curve.

> It isn't bound by inner problems like "(good nutrition, education, etc)", because it is a mathematical model

It's an engineering problem, not a math problem. Transistors only get so small, memory access only gets so fast. There are practical limits to what we can do with information.

> We are capable of accelerating "objects" to 0.99..c.

Are we? In practice? Because it's one thing to say, "the laws of physics don't prohibit it," and quite another to do it with real machines in the real world.

> > technical roadblock at (arbitrarily) 1.5x human intelligence

> I wrote "until the limit of information density".

Yeah, I know: That's wildly optimistic, because it assumes technological progress goes on forever without ever getting stuck at local maxima. Who's to say that it doesn't take an intelligence of at least 300 IQ to come up with the paradigm shift required to build a 200 IQ brain? That would mean machines are capped at 200 IQ forever.

> Note that every intelligent system can be completely simulated by a large enough non-intelligent statistical system, so intelligence isn't inferable from a set of inputs -> outputs.

This is circular. If a non-intelligent statistical system is simulating intelligence, then it is an intelligent system. Intelligence is a thing that can be done, and it is doing it.

> A system (humans) can never completely "understand" itself, because its "information size" is as large as itself, but to contain something, it needs to be larger than that thing.

I don't think this logic checks out. You can fit all the textbooks and documentation describing how a 1TB hard drive works on a 1TB hard drive with plenty of room to spare. Your idea feels intuitively true, but I don't see any reason why it should necessarily be true.

1718627440 4 days ago | parent [-]

> You can't double the speed

I only need two instances to be faster than a single one. This means a human who has the resources to run the system is no longer bound by his own time and energy: he can do anything an infinite number of humans could do.

> Yeah, humans can produce other humans too

In this hypothetical scenario, humans were able to build "AI" (which includes it being formalized, deterministic, and reproducible). A system as capable as a human (= AI) is then able to produce many such systems.

> There are practical limits to what we can do with information.

Yes, but we are nowhere near these limits yet.

> Are we? In practice?

Yes. We are able to build a particle accelerator. Given enough resources, we can have as many particle accelerators as there are particles in a car.

> That would mean machines are capped at 200 IQ forever.

Except when the 300 IQ thing is found by chance. If the system is reproducible and you aren't bound by resources, then a small chance means nothing.
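Roughly (a back-of-the-envelope sketch; p and n are arbitrary):

    If one attempt finds the "300 IQ" idea with probability p > 0, then with
    n independent, reproducible attempts

        P(found at least once) = 1 - (1 - p)^n  ->  1  as n grows.

    The only thing that bounds n is resources.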

> This is circular.

No, it just means intelligence is not attributable to a black box. We don't think other humans are intelligent solely because of their behaviour; we conclude that they are similar to us, and we have introspection into ourselves.

> You can fit all the textbooks and documentation describing how a 1TB hard drive works on a 1TB hard drive with plenty of room to spare.

It's not about encoding the result of having understood. A human is very much capable of computing according to the nature of a human. It's about the process of understanding itself. The hard drive can store this; it can't create it. Try to build a machine that makes predictions about itself, including the lowest level of itself. You won't get faster than time.

bccdee 4 days ago | parent [-]

> Yes, but we are nowhere near these limits yet.

Says who?

> Given enough resources, we can have as many particle accelerators as there are particles in a car.

Given by whom? I said in practice—you can't just assume limitless resources.

> Except when the 300 IQ thing is found by chance. If the system is reproducible and you aren't bound by resources, then a small chance means nothing.

We're bound by resources! Highly so! Stop trying to turn practical questions about what humans can actually accomplish into infinite-monkey-infinite-typewriter thought experiments.

> We don't think other humans are intelligent solely by their behaviour

I wouldn't say that, haha

> It's not about encoding the result of having understood. It's about the process of understanding itself.

A process can be encoded into data. Let's assume it takes X gigabytes to encode comprehension of how a hard drive array works. Since data storage does not grow significantly more complex with size (only physically larger), it stands to reason that an X-GB hard drive array can handily store the process for its own comprehension.
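To make that a bit more concrete (rough numbers, nothing rigorous; C and k are arbitrary constants):

    Suppose describing the drive array's mechanism takes a fixed C bytes,
    and the size-dependent part (addressing, layout) grows like log2(N)
    for an N-byte array.  Then the full description is roughly

        D(N) = C + k * log2(N)  bytes,

    which is vastly smaller than N once N is large, so an N-byte array can
    store its own description with room to spare.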

1718627440 4 days ago | parent [-]

> Says who?

Because I think we haven't even started. Where is the proof-based system able to invent every possible human thought paradigm a priori? I think we are so far away from anything like this that we can't even describe the limits. Maybe we never will.

> you can't just assume limitless resources

I assumed that because, in my opinion, the resource limits of a very rich human (meaning one for whom money is never the limit) and of the one true AI system are not different.

> comprehension

Comprehension is already the result. But I don't think this is a soundly definable concept, so maybe I should stop defending it.

bccdee 2 days ago | parent [-]

> Where is the proof-based system able to invent every possible human thought paradigm a priori?

Beyond the realm of feasibility, I'd imagine. The gulf between what is theoretically possible and what is realistically doable is gargantuan.

> I assumed that, because the resource limits of a very rich human (meaning for whom money is never the limit)

The resources of a very rich human are extremely limited, in the grand scheme of things. They can only mobilize so much of the global economy, and even the entire global economy is only capable of doing so much. That's what I'm getting at: Just because there's some theoretical configuration of matter that would constitute a superintelligence does not guarantee that humanity, collectively, is capable of producing it. Some things are just beyond us.

osigurdson 4 days ago | parent | prev [-]

I'd say it might scale like whatever your mathematical model is telling you, but it might not. I don't think we have a reasonable model for how human intelligence scales as the number of brains increases. In many meetings it feels more like attenuation than scaling.