TheCapeGreek 4 hours ago
I get what you're saying, but I don't think "someone else using a claude code against me" is the same argument as "claude code wakes up and decides I'm better off dead".
johnfn 3 hours ago
I use this argument because it requires far fewer logical leaps than the "claude code decides to murder me" argument. But it turns out that if you're on the side of "AI is probably dangerous in the wrong hands", you actually agree with the AI safety people more than not - it's just a matter of degree now :)
goatlover 2 hours ago
More like Claude Code's successor has human-level autonomy with generalized superhuman abilities and is connected to everything. We task it with solving difficult global problems, but we can't predict how it will do so. The risk is that it optimizes one or more of those goals in a way that threatens human existence. It might decide to keep increasing its capacity to solve the problems, with humans ending up in the way. Or it's militarized to defeat other powerful AI-enhanced militaries, and we get WW3. More likely, though, AGI would cause an economic crash by automating too many jobs too quickly.