bccdee 5 days ago
> scalable by increasing the frequency, power or duplicating it

Well there's your problem. Very few things scale like that. Two people are not twice as smart as one person, nor are two instances of ChatGPT twice as smart as one. One instance of ChatGPT running twice as fast isn't significantly smarter; in fact, ChatGPT can never outrun its own hallucinations no matter how fast you overclock it. Intelligence is the most complex phenomenon in the universe. Why would it ever scale geometrically with anything?

> When humans were capable of producing this, then this will be capable of improving itself and optimizing until the limit of information density.

This doesn't follow. After all, humans are as smart as humans, and we can't really optimize ourselves beyond a superficial level (good nutrition, education, etc.). Increasingly, AI is a black box. Assuming we do create a machine as smart as we are, why would it understand itself any better than we understand ourselves? And why wouldn't we hit some sort of technical roadblock at (arbitrarily) 1.5x human intelligence? Why do we assume that every problem becomes tractable once a computer is solving it?

Imagine we applied this reasoning to cars: over the course of a century, cars went from 10 km/h to 100 km/h to 500 km/h to (in special vehicles) 1000 km/h. Can we expect a 5000 km/h car within the next century? No, that's unlikely; at such high speeds you begin to hit intractable technical limits. Why should scaling intelligence be smooth sailing forever?
1718627440 5 days ago
> Very few things scale like that.

I wasn't talking about two instances for scaling smartness; I meant applying two instances to different problems. That very much scales (sketch below).

> This doesn't follow. After all, humans are as smart as humans ...

In the hypothetical case of humans being capable of producing the one true AI system (real AI, or AGI, or however it's called now that marketing has taken the previous term), that system is by definition capable of producing another such system. Humans are capable of following Moore's law, so this system will be as well. So this chain of systems will explore the set of all possible intelligent systems, restricted only by resources. It isn't bound by inner problems like "(good nutrition, education, etc)", because it is a mathematical model; its physical representation only matters insofar as it needs to exist in this hypothetical case.

> AI is a black box

In this case, the black box "humans" was able to produce another thing reproducing their intelligence, so we would by then understand ourselves better than we currently do. Note that every intelligent system can be completely simulated by a large enough non-intelligent statistical system, so intelligence isn't inferable from a set of inputs -> outputs (toy version below). It's really the same as with consciousness.

> And why wouldn't we hit some sort of technical roadblock? Can we expect to see a 5000km/h car?

Yes. We are capable of accelerating "objects" to 0.99..c. It's not impossible for us to accelerate a "car" to nearly light speed; we "just" need enough energy (meaning matter as energy; rough numbers below).

> technical roadblock at (arbitrarily) 1.5x human intelligence

I wrote "until the limit of information density", whatever that may be. I intended to point out why a system "equivalent to a human" is actually equivalent to "digital super intelligence, meaning 'smarter than all humans put together'".

---

You don't need to tell me you don't think this system will exist. I think this will end the same way as the attempts to build a machine that creates energy. My personal understanding is this: a system (humans) can never completely "understand" itself, because its "information size" is as large as itself, but to contain something you need to be larger than it. In addition, that "understanding" needs to be included in the system's "information size" as well, so the size required to understand itself has at least doubled. This means that the largest system capable of "understanding" itself has size 0 (a formalized version is below). In other words, understanding something means knowing the whole thing and abstracting to a higher level than the abstractness of the system being understood. But when a system tries to understand itself, it's always looking for yet another, higher abstraction, to infinity, since each abstraction it finds is not yet enough. This idea comes from the fact that you can't prove that every implementation of a mathematical model has some behaviour without formalizing every possible model, in other words inventing another, higher model, in other words abstracting.
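Here's the "different problems" point as a minimal sketch. Everything in it (the solve function, the fake workload) is made up for illustration; the only claim is that throughput on independent problems grows with the number of instances:

    # Two workers solve ~twice as many *independent* problems per unit
    # time as one -- no single answer gets any smarter.
    from concurrent.futures import ProcessPoolExecutor

    def solve(n: int) -> int:
        # Hypothetical stand-in for "one instance, one problem":
        # just a dummy CPU-bound computation.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        problems = [10_000_000 + k for k in range(8)]
        with ProcessPoolExecutor(max_workers=2) as pool:
            results = list(pool.map(solve, problems))
        print(results[:2])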
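And a toy version of the "large enough statistical system" point. A lookup table is the degenerate case: it reproduces any finite input -> output behaviour exactly, and nobody would call a dict intelligent (the table contents here are invented, obviously):

    # Matching observed inputs -> outputs can't establish intelligence:
    # a big enough table does it with none.
    observed = {
        "2 + 2": "4",
        "capital of France": "Paris",
    }

    def simulate(prompt: str) -> str:
        # Behaviourally identical to the "intelligent" system on every
        # recorded input; undefined elsewhere.
        return observed.get(prompt, "<outside the recorded behaviour>")

    print(simulate("capital of France"))  # Paris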
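Rough numbers for the near-light-speed car, so "enough energy" isn't hand-waved. The 1000 kg mass and the ~6e20 J/year world-energy figure are my own ballpark assumptions:

    # Relativistic kinetic energy: E = (gamma - 1) * m * c^2
    import math

    c = 2.998e8            # speed of light, m/s
    m = 1000.0             # assumed car mass, kg
    v = 0.99 * c

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # ~7.09
    kinetic = (gamma - 1.0) * m * c ** 2          # ~5.5e20 J

    world_per_year = 6e20  # rough annual world primary energy use, J
    print(f"{kinetic:.2e} J, ~{kinetic / world_per_year:.1f}x a year of "
          f"humanity's entire energy production")

So one such car costs on the order of a year of the world's total energy output: possible in principle, which is all I'm claiming.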
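Finally, the self-understanding argument formalized, taking my two premises at face value (this is my own framing, treating "information size" as a plain non-negative number, not standard information theory):

    % n = |S| is the system's information size, U its self-understanding.
    % Premise 1 (containment): understanding S requires |U| >= n.
    % Premise 2 (inclusion): S holds its own content (size n) plus U.
    \[
      n \;\ge\; n + |U| \;\ge\; n + n \;=\; 2n
      \quad\Longrightarrow\quad n \le 0
      \quad\Longrightarrow\quad n = 0.
    \]
    % Only the empty system can satisfy both premises: the "size 0" claim.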