johnfn · 4 hours ago
I don't think it's a meme. I'm not an AI doomer, but I can understand how AGI would be dangerous. In fact, I'm surprised the argument isn't obvious to anyone who agrees that AI agents really do confer productivity benefits.

The easiest way I can see it is: would it be a good idea today to give some group you don't like - I dunno, North Korea or ISIS, or even just some joe schmoe who is actually Ted Kaczynski - a thousand instances of Claude Code to do whatever they want with? You probably don't think so, which means you already accept that AI can be used to cause some sort of damage. Now extrapolate those feelings out 10 years. Would you give them 1000x whatever Claude Code is 10 years from now? Does that seem at least slightly dangerous? Does the idea make you a little leery? If so, congrats, you now understand the principle behind "AI leads to human extinction".

Obviously, the probability each of us assigns to "human extinction caused by AI" depends very much on how steeply the capability curve climbs in the next 10 years. You probably don't have the graph climbing quite as steeply as Nick Bostrom does, but my personal feeling is that even an AI agent in Feb 2026 is already a little dangerous in the wrong hands.
FranklinJabar · 4 hours ago | parent
Is there any reason to think that intelligence (or computation) is the thing preventing these fears from coming true today, and not, say, economics or politics? I think we greatly overestimate the possible value/utility of AGI to begin with.
TheCapeGreek · 4 hours ago | parent
I get what you're saying, but I don't think "someone else using Claude Code against me" is the same argument as "Claude Code wakes up and decides I'm better off dead".