| ▲ | HeavyStorm a day ago |
| Nay-sayers need to decide whether they fear AI because AI is dumb and will fuck up, or because AI is smart and will take over. |
|
| ▲ | victorbjorklund 21 hours ago | parent | next [-] |
| Silly to call Simon a nay-sayer. Are you a fanatic who thinks anyone pointing out any limitation of current models is a nay-sayer? If someone says they wouldn't want a heart transplant performed purely by GPT-5, are they a nay-sayer, or is that just reflecting reality? |
|
| ▲ | tossandthrow a day ago | parent | prev | next [-] |
| Simon Willison is definitely not a nay-sayer. |
|
| ▲ | 9dev a day ago | parent | prev | next [-] |
| Both are valid concerns; there's no need to decide. Take the USA: they are currently led by a patently dumb president who fucks up the global economy, and at the same time they are powerful enough to do so! For a more serious example, consider the Paperclip Problem[0], in which a very smart system destroys the world through very dumb behaviour. |
| [0]: https://cepr.org/voxeu/columns/ai-and-paperclip-problem |
| ▲ | RajT88 21 hours ago | parent [-] |
| The paperclip problem is a bit hand-wavy about intelligence. It is taken as a given that unlimited intelligence would automatically win, presumably because it could figure out how to do literally anything. But let's consider real-life intelligence: |
| - Our super geniuses do not take over the world. It is the generationally wealthy who do. |
| - Super geniuses also have a tendency to be terribly neurotic, if not downright mentally ill. They can have trouble functioning in society. |
| - There is no thought here about different kinds of intelligence and the roles they play. It is assumed there is only one kind, and AI will have it in the extreme. |
| ▲ | 9dev 20 hours ago | parent [-] |
| To be clear, I don't think the paperclip scenario is a realistic one. The point was that it's fairly easy to conceive of an AI system that is savant-like, and therefore dangerous, in a single domain, yet entirely incapable of grasping the consequences or wider implications of its actions. |
| None of us knows what an actual artificial intelligence really looks like. I find it hard to draw conclusions from observing human super geniuses, when their minds may have next to nothing in common with the AI. Entirely different constraints might apply to them, or none at all. |
| Having said all that, I'm pretty sceptical of an AI takeover doomsday scenario, especially if we're talking about LLMs. They may turn out to be good text generators, but not the road to AGI. But it's very hard to make accurate predictions in either direction. |
| ▲ | hunterpayne 12 hours ago | parent [-] |
| > The point was that it's fairly easy to conceive of an AI system that is savant-like, and therefore dangerous, in a single domain, yet entirely incapable of grasping the consequences or wider implications of its actions. |
| I'm pretty sure there are already humans who do this. Perhaps there are even entire conferences where the majority of people do this. |
|
| ▲ | masswerk 17 hours ago | parent | prev [-] |
| Our product has many issues. You must pick one and must not discuss any other. |