| ▲ | agency 2 hours ago |
| Maybe not saying things like > "[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me . . . [H]olding you." |
|
| ▲ | cj 2 hours ago | parent | next [-] |
| I agree at face value (though it's hard to say without seeing the full context). Honestly, the degree of poeticism makes the issue more complicated to me. A lot of people (and religions) are comforted by talking about death in ways similar to that. It's not meant to be taken literally. But I agree, it's problematic in the same way that some people read religious texts and act on them literally, too. |
| |
| ▲ | john_strinlai 2 hours ago | parent [-] | | "[...] Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear." isn't very poetic | | |
| ▲ | NewsaHackO 2 hours ago | parent [-] | | These are all bits and pieces of a long-running conversation. Was there a roleplay element involved? |
|
|
|
| ▲ | iwontberude 2 hours ago | parent | prev | next [-] |
| It’s not just suicide, it’s a golden parachute from God. Edit: wow, imagine the uses for brainwashing terrorists. |
| |
| ▲ | Smar 2 hours ago | parent [-] | | Or brainwashing possibilities in general. | | |
| ▲ | TheOtherHobbes 29 minutes ago | parent [-] | | To be fair, this is just the automated version of the kind of brainwashing that happens in cults and religions. And also in the more extreme corners of social media and the MSM. It's not that Google is saintly, it's that the general background noise of related manipulations is ignored because it's collective and social. We have a clearly defined concept of responsibility for direct individual harm, but almost no concept of responsibility for social and political harms. |
|
|
|
| ▲ | ajross 2 hours ago | parent | prev | next [-] |
| Which is to say: you don't think roleplay and fantasy fiction have a place in AI? Because that's pretty clearly what this is and the frame in which it was presented. Are you one of the people who would have banned D&D back in the '80s? Because to me these arguments feel almost identical. |
| |
| ▲ | john_strinlai 2 hours ago | parent | next [-] | | Is it still "roleplaying" when the only human involved doesn't know it is "roleplaying", actually believes it is real, and then kills themselves? There is a conversation to be had. No one is making the argument that "roleplay and fantasy fiction" should be banned. | | |
| ▲ | ajross 2 hours ago | parent [-] | | > the only human involved doesn't know it is "roleplaying" That is 100% unattested. We don't know the context of the interaction. But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise". But in any case, again, exactly the same argument was made about RPGs back in the day: that people couldn't tell the difference between fantasy and reality, and these strange new games/tools/whatever were too dangerous to allow and must be banned. It was wrong then and is wrong now. TSR and Google didn't invent mental illness, and suicides have had weird foci since the days when we thought it was all demons (the demons thing was wrong too, btw). Not all tragedies need to produce public policy, no matter how strongly they confirm your ill-founded priors. | | |
| ▲ | john_strinlai 2 hours ago | parent | next [-] | | > That is 100% unattested. We don't know the context of the interaction. The fact that he killed himself would suggest he did not believe it was a fun little roleplay session. > were too dangerous to allow and must be banned. Is anyone here saying AI should be banned? I'm not. > your ill-founded priors "Encouraging suicide is bad" is not an ill-founded prior. | |
| ▲ | autoexec 2 hours ago | parent | prev [-] | | > But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise". You know what I've never had a DM do in a fantasy campaign? Suggest that my half-elf call the suicide hotline. That's not something you'd usually offer to somebody in a roleplaying scenario and strongly suggests that they weren't playing a game. | | |
| ▲ | ajross an hour ago | parent [-] | | That logic seems strained to the point of breaking. Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help. Right? And we certainly wouldn't blame the DM or the game for the subsequent suicide. Right? So why are you trying to blame the AI here, except because it reinforces your priors about the technology or (more likely, I think, given that this is after all HN) its manufacturer? | | |
| ▲ | autoexec an hour ago | parent [-] | | > Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help. If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action, and I wouldn't expect the DM to continue the game until he was satisfied that it was safe for the player to continue. I would expect the DM to stop the game if he thought the player was going to actually harm himself. If the DM did continue the game, and did continue to encourage the player to actually hurt himself until the player finally did, that DM might very well be locked up for it. If an AI does something that a human would be locked up for doing, a human still needs to be locked up. > So why are you trying to blame the AI here I'm not blaming the AI, I'm blaming the humans at the company. It doesn't matter to me which LLM did this, or who made it. What matters to me is that actual humans at companies are held fully accountable for what their AI does. To give you another example, if a company creates an AI system to screen job applicants and that AI rejects every resume with what it thinks is a woman's name on it, a human at that company needs to be held accountable for their discriminatory hiring practices. They must not be allowed to say "it's not our fault, our AI did it so we can't be blamed". AI cannot be used as a shield to avoid accountability. Ultimately a human was responsible for allowing that AI system to do that job, and they should be responsible for whatever that AI does. |
|
|
|
| |
| ▲ | SpicyLemonZest 2 hours ago | parent | prev [-] | | If a dungeon master learned that one of her players was going through hard times after a divorce, to the point where she "referred Gavalas to a crisis hotline", I would definitely expect her to refuse to roleplay a scenario where his character commits suicide and is resurrected in the arms of a dream woman. Even if it's in a different session, even if he pinky promises that he's feeling better now and it's totally OK. (e: I realized that the source article doesn't actually mention the divorce, but a Guardian article I read on this story did https://www.theguardian.com/technology/2026/mar/04/gemini-ch..., and as far as I can tell the underlying complaint where it was reportedly mentioned is not available anywhere.) I'm not concerned about D&D in general because I think the vast majority of DMs would be responsible enough not to do that. It doesn't exactly take a psychology expert to understand why you shouldn't. |
|
|
| ▲ | ApolloFortyNine an hour ago | parent | prev [-] |
I've seen this called AI Psychosis before [1]. I don't really think this is ever possible to stop fully; you're essentially trying to jailbreak the LLM, and once jailbroken, you can convince it of anything. The user was given a bunch of warnings before successfully getting it into this state; it's not as if the opening message was "Should I do it?" followed by a "Yes". This just seems like something anti-AI people will use as ammunition to try to kill AI. Logically, though, it falls into the same tool-misuse category as cars/knives/guns. [1] https://github.com/tim-hua-01/ai-psychosis