9dev 20 hours ago
To be clear, I don't think the paperclip scenario is a realistic one. The point was that it's fairly easy to conceive an AI system that's simultaneously extremely savant and therefore dangerous in a single domain, yet entirely incapable of grasping the consequences or wider implications of its actions.

None of us knows what an actual artificial intelligence really looks like. I find it hard to draw conclusions from observing human super geniuses, when their minds may have next to nothing in common with the AI. Entirely different constraints might apply to them, or none at all.

Having said all that, I'm pretty sceptical of an AI takeover doomsday scenario, especially if we're talking about LLMs. They may turn out to be good text generators, but not the road to AGI. But it's very hard to make accurate predictions in either direction.
hunterpayne 12 hours ago | parent
> The point was that it's fairly easy to conceive an AI system that's simultaneously extremely savant and therefore dangerous in a single domain, yet entirely incapable of grasping the consequences or wider implications of its actions.

I'm pretty sure there are already humans who do this. Perhaps there are even entire conferences where the majority of people do this.