johnfn 4 hours ago

I don't think it's a meme. I'm not an AI doomer, but I can understand how AGI would be dangerous. In fact, I'm surprised the argument isn't considered obvious by anyone who agrees that AI agents really do confer productivity benefits.

The easiest way I can see it is: do you think it would be a good idea today to give some group you don't like - I dunno, North Korea or ISIS, or even just some joe schmoe who is actually Ted Kaczynski - a thousand instances of Claude Code to do whatever they want? You probably don't, which means you understand that AI can be used to cause some sort of damage.

Now extrapolate those feelings out 10 years. Would you give them 1000x whatever Claude Code is 10 years from now? Does that seem slightly dangerous? Doesn't the idea make you at least a little leery? If so, congrats, you now understand the principle behind "AI leads to human extinction". Obviously, the probability each of us assigns to "human extinction caused by AI" depends very much on how steeply the exponential curve climbs over the next 10 years. You probably don't have the graph climbing quite as steeply as Nick Bostrom does, but my personal feeling is that even an AI agent in Feb 2026 is already a little dangerous in the wrong hands.

FranklinJabar 4 hours ago | parent | next [-]

Is there any reason to think that intelligence (or computation) is the thing preventing these fears from coming true today, and not, say, economics or politics? I think we greatly overestimate the possible value/utility of AGI to begin with.

johnfn 3 hours ago | parent [-]

I mean, sure, but I don't want to give my aggressive enemies a bunch of weapons to use against me if I don't have to - even if that's not the primary thing I am concerned about.

FranklinJabar 2 hours ago | parent [-]

Right, but how would a chatbot be considered a weapon? Unless you're engaged in an astroturfing war on Reddit, it doesn't seem very useful.

Most forms of power are more proportional to how much capital you control than anything related to intelligence.

johnfn 2 hours ago | parent [-]

Consider that an iPhone zero-day could be used to blackmail state officials or exfiltrate government secrets. This isn't even hypothetical: Pegasus[1], NSO Group's spyware, exists, and an iPhone exploit was used in the hack that preceded the attempt to blackmail Jeff Bezos[2]. Opus is already digging up security vulnerabilities[3] - imagine if those guys had a thousand instances of Claude Code searching for iPhone zero-days 24/7. I think we can both agree that wouldn't be good.

[1]: https://en.wikipedia.org/wiki/Pegasus_(spyware)
[2]: https://medium.com/@jeffreypbezos/no-thank-you-mr-pecker-146...
[3]: https://news.ycombinator.com/item?id=46902909

TheCapeGreek 4 hours ago | parent | prev [-]

I get what you're saying, but I don't think "someone else using Claude Code against me" is the same argument as "Claude Code wakes up and decides I'm better off dead".

johnfn 3 hours ago | parent | next [-]

I use this argument because it has far fewer logical leaps than the "Claude Code decides to murder me" argument. But it turns out that if you're on the side of "AI is probably dangerous in the wrong hands", you're actually more in agreement than not with the AI safety people - at that point it's just a matter of degree :)

goatlover 2 hours ago | parent | prev [-]

More like Claude Code's descendant has human-level autonomy with generalized superhuman abilities and is connected to everything. We task it with solving difficult global problems, but we can't predict how it will do so. The risk is that it optimizes one or more of those goals in a way that threatens human existence. It could be that it decides to keep increasing its capacity to solve the problems, and humans end up being in the way.

Or it's militarized to defeat other powerful AI-enhanced militaries, and we have WW3.

More likely, though, AGI would cause an economic crash by automating too many jobs too quickly.