1718627440 | 5 days ago
> Very few things scale like that.

I wasn't talking about two instances scaling smartness; I meant applying two instances to different problems. That very much scales.

> This doesn't follow. After all, humans are as smart as humans ...

In the hypothetical case of humans being capable of producing the one true AI system (real AI, or AGI, or however it's called now that marketing has taken the previous term), this system is capable of producing another such system by definition. Humans are capable of following Moore's law, so this system will be as well. So this chain of systems will explore the set of all possible intelligent systems, restricted only by resources. It isn't bound by internal problems like "good nutrition, education, etc.", because it is a mathematical model; its physical representation matters only insofar as it needs to exist in this hypothetical case.

> AI is a black box

In this case, the black box "humans" was able to produce another thing that reproduces its intelligence. So we would have come to understand ourselves better than we currently do. Note that every intelligent system can be simulated by a large enough non-intelligent statistical system, so intelligence isn't inferable from a set of inputs -> outputs. It's really the same as with consciousness.

> And why wouldn't we hit some sort of technical roadblock? Can we expect to see a 5000 km/h car?

Yes. We are capable of accelerating "objects" to 0.99...c. It's not impossible for us to accelerate a "car" to nearly light speed; we "just" need enough energy (meaning matter as energy).

> technical roadblock at (arbitrarily) 1.5x human intelligence

I wrote "until the limit of information density", whatever that may be. I intended to point out why a system "equivalent to a human" is actually equivalent to "digital superintelligence, meaning 'smarter than all humans put together'".

---

You don't need to tell me you don't think this system will exist. I think this will end the same way as the attempts to build a machine that creates energy.

My personal understanding is this: a system (humans) can never completely "understand" itself, because its "information size" is as large as itself, but to contain something, it needs to be larger than that thing. In addition, the "understanding" also needs to be included in its "information size", so the size needed to understand has at least doubled. This means that the largest system capable of "understanding" itself has size 0.

In other words, understanding something means knowing the whole thing and abstracting to a level higher than the abstraction level of the system to be understood. But when a system tries to understand itself, it keeps looking for yet another higher abstraction, to infinity, as each abstraction it finds is never enough.

This idea comes from the fact that you can't prove that every implementation of a mathematical model has some behaviour without formalizing every possible model; in other words, inventing another, higher model; in other words, abstracting.
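To put a number on "enough energy", here is a back-of-envelope sketch in Python (the 1000 kg car mass is an assumed figure for illustration, not something established above):

    # Relativistic kinetic energy of a hypothetical 1000 kg "car" at 0.99c.
    # Back-of-envelope only; the mass is an assumed, illustrative figure.
    c = 3e8                                    # speed of light, m/s
    m = 1000.0                                 # assumed car mass, kg
    v = 0.99 * c
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5  # Lorentz factor, ~7.09
    energy = (gamma - 1.0) * m * c ** 2        # ~5.5e20 J
    print(f"kinetic energy: {energy:.2e} J")

That is on the order of a year of humanity's total primary energy consumption (~6e20 J), which is why "just" belongs in scare quotes.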
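The "size 0" claim above, written out as a short derivation (one way to formalize the premises, with |X| standing for "information size" and D(S) for a complete self-description of system S):

    \[
    |S| > |D(S)| \quad \text{(to contain something, be strictly larger than it)}
    \]
    \[
    |D(S)| \ge |S| \quad \text{(a complete self-description is as large as the system)}
    \]
    \[
    \Rightarrow\; |S| > |S|, \quad \text{a contradiction unless } |S| = 0.
    \]

The premises, not the algebra, carry the weight here.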
bccdee | 4 days ago
> I meant applying two instances to different problems. That very much scales.

You can't double the speed at which you solve a problem by splitting it in two and assigning one person to each half. Fred Brooks wrote a whole book, The Mythical Man-Month, about how this doesn't scale.

> this system is capable of producing another such system by definition

Yeah, humans can produce other humans too. We're talking about whether that system can produce an improved system, which isn't necessarily true. The design could easily be a local maximum with no room for improvement.

> Humans are capable of following Moore's law

Not indefinitely. Technical limitations eventually cause us to hit a point of diminishing returns. Technological progress follows a sigmoid curve, not an exponential one.

> It isn't bound by internal problems like "good nutrition, education, etc.", because it is a mathematical model

It's an engineering problem, not a math problem. Transistors only get so small; memory access only gets so fast. There are practical limits to what we can do with information.

> We are capable of accelerating "objects" to 0.99...c.

Are we? In practice? Because it's one thing to say "the laws of physics don't prohibit it," and quite another to do it with real machines in the real world.

> > technical roadblock at (arbitrarily) 1.5x human intelligence

> I wrote "until the limit of information density".

Yeah, I know: that's wildly optimistic, because it assumes technological progress goes on forever without ever getting stuck at local maxima. Who's to say it doesn't require at least a 300-IQ intelligence to come up with the paradigm shift required to build a 200-IQ brain? That would mean machines are capped at 200 IQ forever.

> Note that every intelligent system can be simulated by a large enough non-intelligent statistical system, so intelligence isn't inferable from a set of inputs -> outputs.

This is circular. If a non-intelligent statistical system is simulating intelligence, then it is an intelligent system. Intelligence is a thing that can be done, and it is doing it.

> A system (humans) can never completely "understand" itself, because its "information size" is as large as itself, but to contain something, it needs to be larger than that thing.

I don't think this logic checks out. You can fit all the textbooks and documentation describing how a 1TB hard drive works on a 1TB hard drive with plenty of room to spare: a textbook is a few megabytes of plain text, so even thousands of them amount to a few gigabytes. Your idea feels intuitively true, but I don't see any reason why it should necessarily be true.
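There's a standard formalization of why splitting one problem between two workers doesn't halve the time: Amdahl's law. A minimal sketch in Python (the parallel fraction of 0.5 is an illustrative assumption, not a measurement of anything):

    # Amdahl's law: best-case speedup from n workers when only a
    # fraction p of the work can actually be divided among them.
    def speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    print(speedup(0.5, 2))     # ~1.33x: two workers, not 2x
    print(speedup(0.5, 1000))  # ~2.0x: the undividable half caps it forever

However many workers you add, the serial fraction sets a hard ceiling on the speedup.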