▲ | 1718627440 5 days ago |
A machine that, according to some mathematical model, can match human intelligence in every domain, and that can be scaled by increasing its frequency or power, or by duplicating it (since it is reproducible), is both "equivalent to a human" and "smarter than all humans put together". Once humans are capable of producing such a machine, it will be capable of improving and optimizing itself up to the limit of information density. The only remaining limit will be money, as a proxy for available resources.
▲ | bccdee 5 days ago | parent | next [-]
> scaled by increasing its frequency or power, or by duplicating it

Well, there's your problem. Very few things scale like that. Two people are not twice as smart as one person, nor are two instances of ChatGPT twice as smart as one. One instance of ChatGPT running twice as fast isn't significantly smarter, and in fact ChatGPT can never outrun its own hallucinations no matter how fast you overclock it. Intelligence is the most complex phenomenon in the universe. Why would it ever scale geometrically with anything?

> Once humans are capable of producing such a machine, it will be capable of improving and optimizing itself up to the limit of information density.

This doesn't follow. After all, humans are as smart as humans, and we can't really optimize ourselves beyond a superficial level (good nutrition, education, etc.). Increasingly, AI is a black box. Assuming we do create a machine as smart as we are, why would it understand itself any better than we understand ourselves? And why wouldn't we hit some sort of technical roadblock at (arbitrarily) 1.5x human intelligence? Why do we assume that every problem becomes tractable once a computer is solving it?

Imagine applying this reasoning to cars: over the course of a century, cars went from 10 km/h to 100 km/h to 500 km/h to (in special vehicles) 1000 km/h. Can we expect a 5000 km/h car within the next century? No, that's unlikely; at such high speeds you begin to hit intractable technical limits. Why should scaling intelligence be smooth sailing forever?
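A rough back-of-the-envelope sketch of the car point (my numbers, purely illustrative; it assumes a drag-limited top speed and an otherwise identical vehicle): drag force grows roughly with the square of speed, so the engine power needed to sustain a top speed grows roughly with its cube.

  # Power needed to overcome aerodynamic drag scales ~v^3
  # (drag force ~ v^2, power = force * velocity).
  def drag_power_ratio(v_new_kmh, v_old_kmh):
      """How many times more power the faster car needs, all else equal."""
      return (v_new_kmh / v_old_kmh) ** 3

  print(drag_power_ratio(1000, 100))   # ~1000x the power: 100 -> 1000 km/h
  print(drag_power_ratio(5000, 1000))  # ~125x the power: 1000 -> 5000 km/h

And that's before transonic and thermal effects make things worse, which is the point: each step gets disproportionately harder rather than smoothly easier.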
▲ | osigurdson 4 days ago | parent | prev [-]
I'd say it might scale like whatever your mathematical model tells you, but it might not. I don't think we have a reasonable model for how human intelligence scales as the number of brains increases. In many meetings, it feels more like attenuation than scaling.