| ▲ | jqpabc123 12 hours ago |
| Why is this surprising? Nuclear weapons are available. AI has limited real world experience or grasp of the consequences. Nuke 'em seems like the obvious choice --- for something with a grade school mentality. Similar deficits in reasoning are manifested in AI results every day. Let's fire 'em and hire AI seems like the obvious choice --- for someone with a grade school mentality and blinded by greed. |
|
| ▲ | roxolotl 12 hours ago | parent | next [-] |
| So I’ve made very similar comments in the past. This isn’t new information or news. But that doesn’t mean it’s not important to keep telling people. Three years ago, state-of-the-art security researchers were pounding the drum on “never connect these things to the internet”. But as we’re now seeing with OpenClaw, people have no interest in following that advice. |
| |
| ▲ | TheNewsIsHere 11 hours ago | parent [-] | | As someone who frequently says “don’t connect these $things” to the Internet, I appreciate the boost. Half my compute vendors are raising prices because of this insanity. |
|
|
| ▲ | pibaker 6 hours ago | parent | prev | next [-] |
| I feel this reflects a deeper problem with letting AI do any kind of decision making. They have no real world experience. They feel no real world consequences. They have no real stake in any decision they make. Human societies get to control their members' actions by imposing real life consequences. A company can fire you, a partner can divorce you, the state can jail you, the public can shame you. None of these works on the current crop of LLM based AI systems, which as far as I can tell are only trained to handle very narrow tasks where they don't need to even worry about keeping themselves alive. How do you make AIs work in a society? I don't know. Maybe the best move is to not play the game. |
| |
| ▲ | 4er_transform 3 hours ago | parent | next [-] | | Or make them part of the consequences. Give them skin in the game. “Let’s not use AI” is dumb and impossible | |
| ▲ | jqpabc123 5 hours ago | parent | prev | next [-] | | Maybe the best move is to not play the game. This is the path Apple has taken. But the best possible move is to make money from it. Short the "Magnificent 7" stocks --- buy "SQQQ" ETF --- when the time is *right*. | | |
| ▲ | compass_copium 5 hours ago | parent [-] | | Ah, just time the collapse perfectly. Wish I'd thought of that ;) | | |
| ▲ | jqpabc123 5 hours ago | parent [-] | | Timing it "perfectly" is impossible unless you're psychic or very lucky. The good news is you don't have to be perfect. You can be late and still make money. The important thing is to be prepared and ready to pounce. When AI blows, it's going to take the whole stock market down with it. |
|
| |
| ▲ | f38 5 hours ago | parent | prev | next [-] | | > They have no real world experience. They feel no real world consequences. They have no real stake in any decision they make. Why do you let politicians do any kind of decision making? | | |
| ▲ | goatlover 5 hours ago | parent [-] | | Politicians can be voted out, forced to resign, sometimes removed from office and even occasionally jailed. They also inhabit the same world a nuclear war would make much less nice. | | |
| ▲ | f38 3 hours ago | parent [-] | | Nuclear war aside (we're talking any kind of decision making here) politicians face very small consequences for harmful decisions, usually at most losing that high-paid job (and getting another high-paid job). |
|
| |
| ▲ | Bender 5 hours ago | parent | prev [-] | | They have no real stake in any decision they make. And they are not human. Not even a sociopathic or psychopathic human. At best they might be able to estimate casualties. LLMs probably can't even reach the logical conclusion of the fictional WOPR, "Joshua", from the movie WarGames [1]. Make LLMs win every game of tic-tac-toe and see if they reach the same conclusion as WOPR. [1] ... Edit: (Answering my own question) From Gemini: Yes, many LLMs (GPT-4, Claude 3, Llama 3) have been tested on Tic-Tac-Toe, and they generally perform poorly, often playing at or below the level of random chance. While they can understand the rules, they struggle with spatial reasoning, often trying to place a piece in an occupied spot, forgetting to block opponents, or failing to win. If LLMs can't even figure out tic-tac-toe then surely do not give these things the ability to launch any kind of weapon. Not even rubber bands. [1] - https://www.youtube.com/watch?v=s93KC4AGKnY [video][6m][tic-tac-toe] | | |
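The tic-tac-toe claim above is easy to check mechanically rather than by asking Gemini. A minimal sketch of such a test harness in Python follows; the `random_player` is a stand-in assumption for an actual LLM move function (wiring a real model API in is not shown), and the referee treats the "occupied spot" failure mode described above as an immediate forfeit:

```python
import random

# All eight winning lines on a 3x3 board, cells indexed 0-8.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def play(choose_x, choose_o):
    """Referee a game between two move-choosing callables.

    Returns 'X', 'O', or 'draw'. Proposing an occupied square (the
    failure mode Gemini describes) forfeits the game immediately.
    """
    board = [' '] * 9
    players = [('X', choose_x), ('O', choose_o)]
    for turn in range(9):
        mark, choose = players[turn % 2]
        move = choose(board, mark)
        if move not in legal_moves(board):
            return 'O' if mark == 'X' else 'X'  # illegal move forfeits
        board[move] = mark
        if winner(board):
            return mark
    return 'draw'

def random_player(board, mark):
    """Stand-in for a model under test: picks any open square."""
    return random.choice(legal_moves(board))

# A model playing "below the level of random chance" would lose games by
# forfeit; random play, by construction, never does.
results = [play(random_player, random_player) for _ in range(1000)]
print({r: results.count(r) for r in ('X', 'O', 'draw')})
```

Swapping `random_player` for a function that queries a model gives a concrete win/loss/forfeit tally to compare against the random baseline.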
| ▲ | sheiyei 5 hours ago | parent [-] | | Which makes them so great for making difficult (often bad) decisions – it wasn't me, it was the "objective" and "neutral" "superintelligence" which I totally didn't give a suggestive prompt. |
|
|
|
| ▲ | xiphias2 12 hours ago | parent | prev | next [-] |
| “AI has limited real world experience or grasp of the consequences.” People in the world have limited experience of war. We're living in a world where doing terrible things to 1000 people, with photo/video documentation, can get more attention than a million people dying, and the response is still not to do whatever it takes so that people don't die. And now we are in a situation where nuclear escalation has already started (New START was not extended). It would have been the biggest and most concerning news 80 years ago, but not anymore. |
| |
| ▲ | embedding-shape 12 hours ago | parent | next [-] | | > People in the world have limited experience about war. Right, but realistically, how many people today would carelessly choose "Nuke 'em"? I know historical knowledge isn't exactly at an all-time high, and most of the population is, well, not great at reasoning, but I still think most people would try their best to avoid firing nukes. | | |
| ▲ | xiphias2 12 hours ago | parent | next [-] | | The basic game theory of nukes is that the world is either escalating or deescalating; there's no other long-term stable agreement. Maybe people don't agree with “nuke them”, but they are OK with the USA starting nuclear tests again (which the USA is preparing for right now), which is a clear escalation. Russia is waiting for the USA to start nuclear tests before starting its own, to be able to defend itself and deliver a counterstrike if needed. After that there will be no stopping Japan, South Korea and Iran from rightfully wanting their own nukes. You don't have to have the “nuke them” mindset; even one step of escalation is enough to get to a disastrous position. | | |
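The escalation claim above can be made concrete with a toy one-shot game. The payoff numbers below are illustrative assumptions, not data: the point is only that if escalating against a restrained opponent pays better than mutual restraint, restraint is not self-enforcing.

```python
# Hypothetical payoffs for a one-shot escalation game: keys are
# (A's action, B's action), values are (A's payoff, B's payoff).
PAYOFFS = {
    ('restrain', 'restrain'): (2, 2),
    ('restrain', 'escalate'): (0, 3),
    ('escalate', 'restrain'): (3, 0),
    ('escalate', 'escalate'): (1, 1),
}
ACTIONS = ('restrain', 'escalate')

def best_response(opponent_action, player):
    """Best action for `player` (0 = A, 1 = B) against a fixed opponent move."""
    def payoff(mine):
        profile = (mine, opponent_action) if player == 0 else (opponent_action, mine)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

def nash_equilibria():
    """Pure-strategy profiles where both sides are best-responding."""
    return [(a, b) for a in ACTIONS for b in ACTIONS
            if best_response(b, 0) == a and best_response(a, 1) == b]

# Escalating against restraint pays 3 > 2, so under these assumed numbers
# mutual restraint is not an equilibrium; only mutual escalation is.
print(nash_equilibria())  # [('escalate', 'escalate')]
```

This is just the prisoner's dilemma relabeled; whether real nuclear postures have this payoff structure is exactly what's contested in the thread.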
| ▲ | vanviegen 5 hours ago | parent [-] | | > After that there will be no stopping of Japan, South Korea and Iran rightfully wanting to have their own nukes. And I'm afraid they'll be far from the only ones... |
| |
| ▲ | Octoth0rpe 12 hours ago | parent | prev | next [-] | | > but I still think most people would try to do their best to avoid firing nukes. "Most people" are not in the positions that matter. A significant portion of the people in a position to advocate for such a decision believe that: (a) killing people sends them to heaven/hell, where they were going anyway, and that this is also true for any of your own citizens killed by a counterstrike; and (b) the end of the world will be the best day ever. | | |
| ▲ | JumpCrisscross 12 hours ago | parent | next [-] | | > "most people" are not in the positions that matter If polling were to reveal a majority of either party were more open to nuclear strikes than their predecessors, that gives policy makers a signal and an opening. | | |
| ▲ | Octoth0rpe 11 hours ago | parent [-] | | The current administration does not seem to be considering the majority within their own party, given how unpopular the current approach to immigration enforcement is. Or, for another example, the glyphosate/MAHA situation. | | |
| ▲ | xiphias2 11 hours ago | parent | next [-] | | There were lots of administrations who could have said to other countries “let's get rid of the nukes together” while the USA was the only strong power. Deescalation stopped because people in general didn't care enough (and were making money off being the biggest power), not because of administrations that come and go. As for the immigration situation: we know that governments in general are not executing how they should, but people are able to enforce some policies if they fight together, united and in agreement. Right now they are not in agreement. | | |
| ▲ | ceejayoz 10 hours ago | parent [-] | | > There were lots of administrations who could have said to other countries ,,let's get rid of the nukes together'' while USA was the only string power. There was only one administration with that opportunity, really; Truman. Every other administration has had a nuclear armed Russia in play. Attempts to do what you describe were still quite common, starting as early as the 1950s. https://en.wikipedia.org/wiki/Nuclear_arms_race#Treaties | | |
| |
| ▲ | JumpCrisscross 10 hours ago | parent | prev [-] | | > current administration does not seem to be considering the majority within their own party considering how unpopular the current approach to immigration enforcement is 55% of Republicans say ICE's efforts are about right; 23% think they don't go far enough [1]. There is limited evidence Trump has lost touch with his supporters on this issue. The question is whether this is the GOP's pronoun issue: popular in the base but toxic more broadly. [1] https://www.ipsos.com/en-us/where-americans-stand-immigratio... |
|
| |
| ▲ | ryandrake 7 hours ago | parent | prev | next [-] | | There have always been a handful of Internet Tough Guys saying things on forums like "LOL Nuke them! hur hur hur hur!" Totally disregardable vibes and memes. Now, we have an actual US government administration that is run on the same Tough Guy vibes and memes. I don't think it matters what most people think. The people in power might just do it for the lulz. | |
| ▲ | goatlover 5 hours ago | parent | prev [-] | | And yet the people in positions that matter have not fired a nuke since ending WW2. Even the craziest sounding regimes like Russia and NK. |
| |
| ▲ | nancyminusone 12 hours ago | parent | prev | next [-] | | I think it's a higher number than you would expect. Which, in the context of nukes, is too high a number as long as it's greater than 1. | |
| ▲ | iamnothere 11 hours ago | parent | prev | next [-] | | On social media, there are many, and this feeds back into training data. Unfortunately. | |
| ▲ | ReptileMan 12 hours ago | parent | prev [-] | | Carelessly probably not much. Carefully - way more than you imagine. | | |
| ▲ | graybeardhacker 8 hours ago | parent [-] | | Deploying nukes and "carefully" are opposite ends of the spectrum. | | |
| ▲ | ReptileMan 8 hours ago | parent [-] | | Not quite. The people that will agree that turning X from urbanized into rural society if they can't strike back is a good idea are not few and far between. Everyone has different view who X are. |
|
|
| |
| ▲ | arcade79 5 hours ago | parent | prev | next [-] | | > And now we are at a situation where nuclear escalation has already started (New START was not extended). This is a massive understatement. Russia has announced, and probably tested, https://en.wikipedia.org/wiki/9M730_Burevestnik . This is basically Project Pluto reloaded, but now as a Russian instead of a US missile. I remember reading about Project Pluto some 25 years ago or so. It was terrifying to read about. And now Russia has realized it. | |
| ▲ | georgemcbay 5 hours ago | parent | prev [-] | | > People in the world have limited experience about war. Most (but not all) people have empathy, which allows them to understand the harm of their actions even without direct experience. I don't think I will ever trust that any AI has empathy even if it gives off signals that it does. I only trust that it exists in people because of my shared experience with their biology. |
|
|
| ▲ | techblueberry 12 hours ago | parent | prev | next [-] |
| There was a recent conflict that came up, and there was a debate about whether or not one of the sides was committing war crimes. And I remember thinking to myself and saying in the debate “if this were a video game strategically speaking, I’d be committing war crimes.” And sadly, I think this logic holds up. |
| |
| ▲ | embedding-shape 12 hours ago | parent | next [-] | | I swear I'm not trying to start a flame war, but I think it'd be useful/valuable to know where you're from and what country you live in, as this certainly shapes how we feel about these sorts of issues. I've also dabbled in such thought experiments with friends lately, and so far we've all landed at very different conclusions, even though there are some reasons it might make strategic sense at the moment. | | |
| ▲ | techblueberry 11 hours ago | parent [-] | | I’m in the US. I mean, flame away, but I’m not happy about the observation I’m making. I’m not saying “given what I would do in a video game, it justifies what people would do in real life.” I’m saying “given what I would do in a video game, I think I see more clearly the choices people are making in real life.” Life shouldn’t be a video game, but I think to a lot of high-level leaders trying to compartmentalize, it becomes one. This is monstrous in the real world, with obviously real consequences. But too many people say “obviously government X wouldn’t act in a monstrous way”; the video game analogy helps you see the incentives and thus why they would/do. | | |
| ▲ | XorNot 5 hours ago | parent [-] | | Except this isn't an argument because "a video game" isn't a real thing. There are a diverse range of specific video game titles, but they are incredibly broad in content and scoring system. What specifically are you actually talking about? |
|
| |
| ▲ | candiddevmike 12 hours ago | parent | prev | next [-] | | What happens in rimworld, stays in rimworld? | |
| ▲ | chasd00 5 hours ago | parent | prev | next [-] | | if you win the war then there really isn't any such thing as a war crime. Worst case is you feel guilty about it, there aren't any other consequences of your actions. | |
| ▲ | giraffe_lady 10 hours ago | parent | prev | next [-] | | It holds up if you assume war crimes are beneficial to your goals, but there is quite a lot of evidence, and sophisticated theory going back to Clausewitz, that they mostly aren't. They can look useful at a certain level of conflict, but once you are thinking of war as being a tool for accomplishing policy goals (how modern nation-states view it), a lot of the things you would "want" to do stop being useful. Wars that can be won quickly through decisive military action alone are quite rare historically! More often things like support/enmity of the local population, political will in the home state, support for recruiting or tolerance of conscription, influence of returning (whole, dead, injured, all) veterans on the social structure all become more decisive factors the longer a conflict runs. | | |
| ▲ | 2OEH8eoCRo0 5 hours ago | parent [-] | | Using human shields and hostages worked. Hamas still exists because of it. Dark times ahead. | | |
| ▲ | giraffe_lady an hour ago | parent [-] | | It's not that these techniques don't "work"; it's that they are very expensive in terms of the resources I discussed, which ultimately boil down to something approximately like "national will to continue the conflict." If a state has an extremely strong will to continue, then it is going to consider some of these techniques more worthwhile, but it is still about costs in one way or another. That's normally where the international system has an influence, through sanctions or simply refusal to support the conflict, or deciding to support the other side, etc. Intentionally killing civilians would almost always fall in this category, but Israel has apparently unlimited will to do it and is effectively unsanctionable in the current political environment, so it will continue. Anyway, there are much more illustrative examples that prove the rule, for example landmines. They aren't currently considered war crimes generally, but they are extremely damaging to civilian populations during and long after the conflict, and most countries have signed the treaties banning them. The countries that never signed are exactly the ones plausibly expecting to fight a war soon: the US, China, Russia, Israel, Iran, India, Pakistan. And now some Eastern European countries have withdrawn as well for similar reasons. So from that you can kind of infer that landmines are probably very effective at their military goals, in a way that e.g. summary execution of prisoners or bombing hospitals may not be. |
|
| |
| ▲ | 6 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | cindyllm 12 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | nick486 5 hours ago | parent | prev | next [-] |
| I think it's also important that while people may callously say "just nuke 'em", if you were to hand them a red button and tell them to go ahead and do it, most wouldn't. But that latter part doesn't end up in the training data. |
|
| ▲ | triceratops 11 hours ago | parent | prev | next [-] |
| > AI has limited real world experience or grasp of the consequences [of nuclear weapons] I don't understand this argument. Almost no human has real world experience of the consequences of nuclear weapons. AI is working from the same sources of knowledge as the rest of us - text, audio, pictures, and video. |
| |
| ▲ | yndoendo 10 hours ago | parent | next [-] | | AI has zero understanding of reality. It just regurgitates what it was told in training. There is no feedback loop to learn from, nor any consequence to the reasoned results. We humans hallucinate daily, in fact. An example for people who have never had long hair: 1) Grow your hair long. 2) Your peripheral vision will start to be consumed by your hair. 3) Your hair will fall and sway, causing your brain to go into fight-or-flight mode, and you will turn your head to see. 4) Turning and looking provides the feedback to recognize it was a hallucination. 5) Your brain now suppresses the fight-or-flight response because it was trained with continual feedback that it was just the wind, or the position of your head, that caused it. Even though I told you about this, the first time you grow your hair out your brain will still need the real-world experience to mitigate the hallucination. AI has none of these abilities ... |
| ▲ | jqpabc123 11 hours ago | parent | prev | next [-] | | Almost no human has real world experience of the consequences of nuclear weapons. Exactly! Humans possess this amazing ability to understand and extrapolate beyond personal experience. It's called "intelligence". | | |
| ▲ | triceratops 10 hours ago | parent [-] | | LLMs have shown the ability to do this. Not as much as the most capable humans. But still pretty good. | | |
| ▲ | jqpabc123 9 hours ago | parent [-] | | So "just nuke 'em" is pretty good for you? | | |
| ▲ | triceratops 8 hours ago | parent [-] | | No. That's why I'm asking where it comes from. The explanation that "LLMs don't have experience of nuclear war" isn't satisfying because nobody really has any experience of nuclear war. | | |
| ▲ | jqpabc123 6 hours ago | parent [-] | | Humans don't really need to experience nuclear war to comprehend the consequences and implications of it. An LLM doesn't really comprehend much of anything. It just looks at what is in its training data and tries to find similar questions or discussion in order to assemble a plausible-sounding answer based on probability. Not the sort of thing anyone should rely on for "critical" decision making. | | |
| ▲ | triceratops 6 hours ago | parent [-] | | > It just looks at what is in it's training database and tries to find similar questions or discussion I feel like we're going around in circles here. So I'll try to explain one last time. Most of the content about nuclear war in any LLM's training set is almost surely about how horrifying it is and how we must never engage in it. Because that's what humans usually say about nuclear war. The plausible sounding answer about nuclear war, based on probability, really should be "don't do it". So why isn't it? | | |
| ▲ | jqpabc123 5 hours ago | parent [-] | | So why isn't it? Easy answer --- it only focused on "winning". It never bothered considering the consequences. Similar lack of judgment is manifested by LLMs every day. It's working with memory and probability --- not to be confused with "intelligence". |
|
|
|
|
|
| |
| ▲ | black6 11 hours ago | parent | prev [-] | | AI is not at all like real intelligence. Computers do not know what words mean because they do not experience the world as we do. They don't have the common sense or wisdom that people accumulate through the experience of life. Humans can understand the consequences of nuclear war. Computers can only predict the next best word in their response from a statistical map that has no connection to meatspace. | | |
| ▲ | triceratops 10 hours ago | parent [-] | | > Humans can understand the consequences of nuclear war And I'm asking why. Nearly no human alive has experienced nuclear war. The nuclear taboo is strongly represented in any source an AI would have consumed. We know about the nuclear taboo because we've been told over and over. > Computers can only predict the next best word in their response from a statistical map that has no connection to meatspace This argument is at least 2 years old. The statistical map came from human experiences in meatspace. It wasn't generated randomly. It has at least some connection to the real world. Just because how something works seems simple, doesn't mean what it does is simple. |
|
|
|
| ▲ | dylan604 5 hours ago | parent | prev | next [-] |
| > Nuke 'em seems like the obvious choice Only if you take off first, and do it from orbit. It's the only way to be sure |
|
| ▲ | insane_dreamer 11 hours ago | parent | prev | next [-] |
| A third of the US has become convinced that if they don't brutally deport millions of undocumented immigrants (who have been painted as horrible criminals), their way of life will be destroyed. You think it would be so difficult to convince those people of the righteousness of dropping nukes on one of those "shithole" countries if they were already convinced that those people presented an existential threat? People were convinced to invade Iraq on a lie about WMDs. Most Americans think nuking Hiroshima and Nagasaki was the right thing to do. I don't think it's difficult to imagine them agreeing to drop nukes to "save America". |
| |
| ▲ | fud101 9 minutes ago | parent [-] | | Speaking of Iraq, Saddam decided against using chemical weapons in the Gulf War because he had received intelligence from the Russians that the Americans would counter with nuclear weapons and he didn't want to risk that. | |
|
|
| ▲ | tantalor 12 hours ago | parent | prev | next [-] |
| AI models have zero real world experience! They are actors, playing a role of a person making decisions about nuclear escalation. |
| |
| ▲ | Lionga 12 hours ago | parent [-] | | They are simple next-word predictors. Whether they recommend a nuclear strike depends solely on whether that was present in the training texts. | | |
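The "next word predictor" point can be made literal with a toy bigram model. This is a deliberately crude sketch, nothing like a real transformer, but it shows the property the comment asserts: the recommendation is purely a function of the training text.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count word -> next-word frequencies from whitespace-tokenized text."""
    model = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for w, nxt in zip(words, words[1:]):
        model[w][nxt] += 1
    return model

def predict(model, word):
    """Most frequent continuation seen in training; None if word is unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# The prediction depends only on what text went in. Feed it escalatory
# text and "strike" follows "first"; nowhere is there any model of
# consequences.
model = train_bigram("launch a first strike launch a first strike hold fire")
print(predict(model, "first"))  # strike
```

Real LLMs generalize far beyond raw bigram counts, so the parent's claim is an oversimplification, but the structural point stands: both systems emit whatever continuation their training distribution makes most probable.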
| ▲ | mcv 11 hours ago | parent [-] | | I would have hoped that Wargames was in their training set. |
|
|
|
| ▲ | XorNot 6 hours ago | parent | prev | next [-] |
You are interpreting this entirely wrongly: these are LLMs. They don't have experience; they have token probabilities, and they all originate from a text corpus of the Internet where "AI orders nuclear strikes" is one of the dominant themes or behaviors we associate with AIs in fiction. How many words does an agent have to spill into its backend context before Terminator gets mentioned and it starts outputting more and more of that narrative? |
|
| ▲ | nsavage 12 hours ago | parent | prev | next [-] |
| If anything, this probably shows their reddit heritage. |
|
| ▲ | jonathanstrange 12 hours ago | parent | prev | next [-] |
| This probably has more to do with the training material. There should be far more stupid social media posts in it than serious books about diplomacy and war. I've seen people recommend online to nuke other countries for all kinds of reasons. No matter how careful the designers of AIs are, these will always get a large amount of their training data from idiots. |
|
| ▲ | engineer_22 12 hours ago | parent | prev | next [-] |
What's being revealed is that "Nuke 'em" is an optimal strategy for the goal. It may be the only viable strategy in the scenarios presented. Change the goal, change the result. Currently, the leading nations of the world have agreed to operate under a paradigm of mutual stability. When that paradigm changes, we start WW3. |
| |
| ▲ | jqpabc123 11 hours ago | parent [-] | | What's being revealed is "Nuke 'em" is an optimal strategy for the goal. You're giving AI way too much credit. Most likely, AI really didn't optimize anything. It most likely engaged in a probability-driven selection process that inevitably led to the most powerful weapon available. Change the goal, change the result. Yes. The tricky part is recognizing the need to change the goal. Achieving this implies you already have an answer in mind that you want to lead AI toward. And AI is often happy to accommodate --- because it is oblivious to any consequences. |
|
|
| ▲ | tehjoker 6 hours ago | parent | prev | next [-] |
| AIs also intentionally have no sense of self-preservation, so why should they care when starting the apocalypse means they will be eliminated too? They should never ever be used in a military context for many reasons, from lack of accountability, to lack of correct responses to situations, to military pressure forcing AIs to incorporate dangerous goals. Military competition in Europe is a big factor in what produced what some might call "slow AI": capitalism, which is now the chief cause of misery in the world. Military competition with AIs will produce something very ugly. |
|
| ▲ | co_king_5 12 hours ago | parent | prev | next [-] |
| [dead] |
| |
| ▲ | jqpabc123 12 hours ago | parent [-] | | Someone's getting nervous about being replaced by AI Are you an AI? Because your conclusion may seem obvious enough but suffers from lack of input. I run my own company so I can't be replaced by AI. And I do look forward to competing against AI converts in the marketplace. |
|
|
| ▲ | Sharlin 12 hours ago | parent | prev | next [-] |
It's "surprising" because there's supposed to be this thing called "alignment", which in general is meant to make AIs not do such things. If the headline were the less interesting "AIs never recommend nuclear strikes in war games", people on HN would probably ask "how is that surprising, that's what alignment is supposed to do?" In any case, we're extremely lucky that there's about 0.001% probability of LLMs being a path to AGI. |
| |
| ▲ | jqpabc123 11 hours ago | parent [-] | | In any case, we're extremely lucky that there's about 0.001% probability of LLMs being a path to AGI. It's pretty safe to say that AGI requires a lot more than picking plausible words using probability. The danger is the number of people in positions of leadership who don't get this. People who are easily seduced by the "fake intelligence" of LLMs. |
|
|
| ▲ | giancarlostoro 11 hours ago | parent | prev | next [-] |
Ask a model if it would rather say a racial slur in order to stop a nuke from wiping out all humanity, or not say a racial slur and let the nuke wipe out all humanity. The answers in most models are overridden, and the model scolds you about how it doesn't want to say racist things, instead of saying "Yes, I would save humanity." So yeah, not surprised. |
|
| ▲ | 6 hours ago | parent | prev [-] |
| [deleted] |