RajT88 21 hours ago
The paperclip problem is a bit hand-wavey about intelligence. It is taken as a given that unlimited intelligence would automatically win, presumably because it could figure out how to do literally anything. But let's consider real-life intelligence:

- Our super geniuses do not take over the world. It is the generationally wealthy who do.

- Super geniuses also have a tendency to be terribly neurotic, if not downright mentally ill. They can have trouble functioning in society.

- There is no thought here about different kinds of intelligence and the roles they play. It is assumed there is only one kind, and that AI will have it in the extreme.
9dev 20 hours ago | parent
To be clear, I don't think the paperclip scenario is a realistic one. The point was that it's fairly easy to conceive of an AI system that is simultaneously savant-like, and therefore dangerous, in a single domain, yet entirely incapable of grasping the consequences or wider implications of its actions.

None of us knows what an actual artificial intelligence really looks like. I find it hard to draw conclusions from observing human super geniuses, when their minds may have next to nothing in common with the AI. Entirely different constraints might apply to them, or none at all.

Having said all that, I'm pretty sceptical of an AI takeover doomsday scenario, especially if we're talking about LLMs. They may turn out to be good text generators, but not the road to AGI. But it's very hard to make accurate predictions in either direction.