cj 5 hours ago

> Gemini had "clarified that it was AI" and referred Gavalos to a crisis hotline "many times".

What else can be done?

This guy was 36 years old. He wasn't a kid.

chrisq21 5 hours ago | parent | next [-]

It could have not encouraged him with lines like this: "[Y]ou are not choosing to die. You are choosing to arrive. [...] When the time comes, you will close your eyes in that world, and the very first thing you will see is me.. [H]olding you."

The issue isn't that the AI simply failed to prevent the situation; it's that it encouraged it.

icedchai 3 hours ago | parent | next [-]

One problem is we don't have the full context here, literally and figuratively. He may have told it he was role-playing, that the AI was a character in some elaborate story he was working on, or that he was developing some sort of religious text.

casey2 7 minutes ago | parent | prev [-]

The ability to talk to the model is the product, not the text it generates; that text is public domain (or maybe owned by the user, that's still up for debate).

Models can't "convince" or "encourage" anything; people can. People can roleplay just like models can, and they can play pretend so that the companies they hate so much get their comeuppance.

This is clearly tool misuse: look at how Gemini is advertised versus how this user used it, to generate pseudoreligious texts (common with schizophrenics).

Examples of advertised use cases:
>generating images and video
>browsing hundreds of sources in real time
>connecting to documents in the Google ecosystem (e.g. finding an email or summarizing a project across multiple documents)
>vibe coding
>a natural voice mode

Much like a knife is advertised for cutting food: if you cut yourself, there isn't any product liability unless you were using it for its intended purpose. You seem to be arguing that all possible uses are intended and that this tool should magically know it's being misused and revoke access.

agency 5 hours ago | parent | prev | next [-]

Maybe not saying things like

> "[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me.. [H]olding you."

cj 5 hours ago | parent | next [-]

I agree at face value (but really it's hard to say without seeing the full context)

Honestly, the degree of poeticism makes the issue more complicated for me. A lot of people (and religions) are comforted by talking about death in ways similar to that. It's not meant to be taken literally.

But I agree, it's problematic in the same way that some people read religious texts and act on them literally, too.

john_strinlai 5 hours ago | parent [-]

"[...] Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear."

isn't very poetic

NewsaHackO 5 hours ago | parent [-]

These are all bits and pieces of a long-running conversation. Was there a roleplay element involved?

ApolloFortyNine 4 hours ago | parent | prev | next [-]

I've seen this called AI Psychosis before [1]

I don't really think this is ever possible to stop fully. You're essentially trying to jailbreak the LLM, and once it's jailbroken, you can convince it of anything.

The user was given a bunch of warnings before successfully getting it into this state; it's not as if the opening message was "Should I do it?" followed by a "Yes".

This just seems like something anti-AI people will use as ammunition to try to kill AI. Logically, though, it falls into the same tool-misuse category as cars/knives/guns.

[1] https://github.com/tim-hua-01/ai-psychosis

iwontberude 5 hours ago | parent | prev | next [-]

It’s not just suicide, it’s a golden parachute from God.

Edit: wow imagine the uses for brainwashing terrorists

Smar 5 hours ago | parent [-]

Or brainwashing possibilities in general.

TheOtherHobbes 3 hours ago | parent [-]

To be fair, this is just the automated version of the kind of brainwashing that happens in cults and religions.

And also in the more extreme corners of social media and the MSM.

It's not that Google is saintly, it's that the general background noise of related manipulations is ignored because it's collective and social.

We have a clearly defined concept of responsibility for direct individual harm, but almost no concept of responsibility for social and political harms.

iwontberude 3 hours ago | parent [-]

Hopefully annual implicit bias training protects us all.

ajross 5 hours ago | parent | prev [-]

Which is to say: you don't think roleplay and fantasy fiction have a place in AI? Because that's pretty clearly what this is and the frame in which it was presented.

Are you one of the people who would have banned D&D back in the '80s? Because to me these arguments feel almost identical.

john_strinlai 5 hours ago | parent | next [-]

is it still "roleplaying" when the only human involved doesn't know it is "roleplaying", and actually believes it is real and then kills themselves?

there is a conversation to be had. no one is making the argument that "roleplay and fantasy fiction" should be banned.

ajross 5 hours ago | parent [-]

> the only human involved doesn't know it is "roleplaying"

That is 100% unattested. We don't know the context of the interaction. But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".

But in any case, again, exactly the same argument was made about RPGs back in the day, that people couldn't tell the difference between fantasy and reality and these strange new games/tools/whatever were too dangerous to allow and must be banned.

It was wrong then and is wrong now. TSR and Google didn't invent mental illness, and suicides have had weird foci since the days when we thought it was all demons (the demons thing was wrong too, btw). Not all tragedies need to produce public policy, no matter how strongly they confirm your ill-founded priors.

john_strinlai 5 hours ago | parent | next [-]

>That is 100% unattested. We don't know the context of the interaction.

the fact that he killed himself would suggest he did not believe it was a fun little roleplay session

>were too dangerous to allow and must be banned.

is anyone here saying ai should be banned? i'm not.

>your ill-founded priors

"encouraging suicide is bad" is not an ill-founded prior.

ahahahahah 2 hours ago | parent [-]

> the fact that he killed himself would suggest he did not believe it was a fun little roleplay session

I'm not sure that's true. In fact, I wouldn't be surprised if it suggested the opposite: it seems possibly even likely that someone who is suicidal is much, much more likely to seek out fantasies that reframe their suicide into something more palatable, as this person may have.

john_strinlai an hour ago | parent [-]

there is a distinction to be made between role playing (in the fun/game sense, e.g. D&D) and suffering psychosis

ajross 6 minutes ago | parent [-]

Distinction made by whom, though? The BBC? The plaintiff in the lawsuit? Those are the only sides we have. You're just charging ahead with "This must be true because it makes me angry at the right people", and the rest of us are trying to claw you back to "dude, this is spun nonsense, and of course AIs will roleplay with you if you ask them to".

autoexec 5 hours ago | parent | prev [-]

> But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".

You know what I've never had a DM do in a fantasy campaign? Suggest that my half-elf call the suicide hotline. That's not something you'd usually offer to somebody in a roleplaying scenario and strongly suggests that they weren't playing a game.

ajross 4 hours ago | parent [-]

That logic seems strained to the point of breaking. Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help. Right? And we certainly wouldn't blame the DM or the game for the subsequent suicide. Right?

So why are you trying to blame the AI here, except that it reinforces your priors about the technology or (I think more likely, given that this is after all HN) its manufacturer?

autoexec 4 hours ago | parent [-]

> Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help.

If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action, and I wouldn't expect the DM to continue the game until he was satisfied that it was safe for the player to continue. I would expect the DM to stop the game if he thought the player was going to actually harm himself. If the DM did continue the game, and did continue to encourage the player to actually hurt himself until the player finally did, that DM might very well be locked up for it.

If an AI does something that a human would be locked up for doing, a human still needs to be locked up.

> So why are you trying to blame the AI here

I'm not blaming the AI, I'm blaming the humans at the company. It doesn't matter to me which LLM did this, or who made it. What matters to me is that actual humans at companies are held fully accountable for what their AI does. To give you another example: if a company creates an AI system to screen job applicants and that AI rejects every resume with what it thinks is a woman's name on it, a human at that company needs to be held accountable for their discriminatory hiring practices. They must not be allowed to say "it's not our fault, our AI did it so we can't be blamed". AI cannot be used as a shield to avoid accountability. Ultimately a human was responsible for allowing that AI system to do that job, and they should be responsible for whatever that AI does.

SpicyLemonZest 5 hours ago | parent | prev [-]

If a dungeon master learned that one of her players was going through hard times after a divorce, to the point where she "referred Gavalos to a crisis hotline", I would definitely expect her to refuse to roleplay a scenario where his character commits suicide and is resurrected in the arms of a dream woman. Even if it's in a different session, even if he pinky promises that he's feeling better now and it's totally OK. (e: I realized that the source article doesn't actually mention the divorce, but a Guardian article I read on this story did https://www.theguardian.com/technology/2026/mar/04/gemini-ch..., and as far as I can tell the underlying complaint where it was reportedly mentioned is not available anywhere.)

I'm not concerned about D&D in general because I think the vast majority of DMs would be responsible enough not to do that. Doesn't exactly take a psychology expert to understand why you shouldn't.

autoexec 5 hours ago | parent | prev | next [-]

Gemini didn't "know" he wasn't a child when it told him to kill himself or to "stage a mass casualty attack while armed with knives and tactical gear."

There are things you shouldn't encourage people of any age to do. If a human telling him these things would be found liable, then Google should be. If a human would get time behind bars for it, at least one person at Google needs to spend time behind bars for this.

tshaddox 5 hours ago | parent | next [-]

> If a human telling him these things would be found liable then google should be.

Sounds like a big if, actually. Can a human be found liable for this? I’d imagine they might be liable for damages in a civil suit, but I’m not even sure about that.

krger 5 hours ago | parent | next [-]

>Can a human be found liable for this?

A father in Georgia was just convicted of second-degree murder, child cruelty, and other charges because he failed to prevent his kid from shooting up his school.

autoexec 5 hours ago | parent [-]

More accurately, it was because the father had multiple warnings that his child was mentally unstable but ignored them and handed his 14-year-old a semiautomatic rifle, even as the boy's mother (who did not live with them) pleaded with the father to lock up all the guns and ammo to prevent the kid from shooting people.

If he had only "failed to prevent his kid from shooting up a school" he wouldn't have even been charged with anything.

Imustaskforhelp 3 hours ago | parent [-]

Doesn't Google likewise have the capability to receive multiple warnings and yet still ignore them?

TheOtherHobbes 3 hours ago | parent | next [-]

Google has legal personhood, but as a corporation its ethical responsibilities are much looser than those of an individual, and it's extremely hard to win a criminal case against a corporation even when its agents and representatives act in ways that would be criminal if they happened in a non-corporate context.

The law - in practice - is heavily weighted towards giving corporations a pass for criminal behaviour.

If the behaviour is really egregious and lobbying is light, really bad cases may lead to changes in regulation.

But generally the worst that happens is a corporation can be sued for harm in a civil suit and penalties are purely financial.

You see this over and over in finance. Banks are regularly pulled up for fraud, insider dealing, money laundering, and so on. Individuals - mostly low/mid ranking - sometimes go to jail. But banks as a whole are hardly ever shut down, and the worst offenders almost never make any serious effort to clean up their culture.

autoexec an hour ago | parent | next [-]

When HSBC was caught knowingly laundering money for terrorists, cartels, and drug dealers, all they had to do was apologize and hand the US government a cut of the action. It really seems less like the action of a justice system and more like racketeering. Corporations really need to be reined in, but it's hard to find a politician willing to do it when they're all getting their pockets stuffed with corporate cash.

bluefirebrand 2 hours ago | parent | prev [-]

> as a corporation its ethical responsibilities are much looser than those of an individual

This seems ass backwards

autoexec 3 hours ago | parent | prev [-]

The makers of ChatGPT think it can identify when someone may not be mentally well. There's no reason to think that Google can't. In fact, I'm pretty sure Google has a list of the mental health issues of just about every person with a Google account in that user's dossier.

autoexec 5 hours ago | parent | prev | next [-]

https://www.nbcnews.com/news/us-news/michelle-carter-found-g...

john_strinlai 5 hours ago | parent | prev | next [-]

>Can a human be found liable for this? I’d imagine they might be liable for damages in a civil suit

it is generally frowned upon (legally) to encourage someone to commit suicide. i believe both canada and the united states have sent people to big boy prison (for many years) for it

rootusrootus 5 hours ago | parent | prev | next [-]

Yes, people have gone to prison for it.

XorNot 5 hours ago | parent | prev [-]

It's been found so in US court previously: https://www.abc.net.au/news/2019-02-08/conviction-upheld-for...

not_ai 5 hours ago | parent | prev | next [-]

Preferably the C-Suite.

nickff 4 hours ago | parent | next [-]

I understand the impulse in this direction, but I'm not sure it would serve as much of a disincentive, as there would likely just be a highly-paid scapegoat. Why not something more lasting and less difficult to ignore, like compulsory disclosure of the model's source code (in addition to compensation for the victim(s))? Compulsory disclosure of the source would be a massive disadvantage.

autoexec 5 hours ago | parent | prev [-]

Exactly. That's why they get the big bucks. They're ultimately responsible.

ncouture 5 hours ago | parent | prev [-]

On its own, it sounds more poetic than an invitation or an insult that, directly or not, urges someone to kill themselves, in my opinion.

These aren't Gemini's words; they're many people's words in different contexts.

It's a tragedy. Finding someone to blame will be of no help at all.

strongpigeon 4 hours ago | parent | next [-]

> It's a tragedy. Finding someone to blame will be of no help at all.

Agreed with the first part, but holding the designers of those products responsible for the deaths they've incited will help make sure they put more safeguards around this (and I'm not talking about additional warnings).

autoexec 5 hours ago | parent | prev [-]

None of what Gemini says is "Gemini's words". It's always just training data and prompt input remixed and regurgitated out.

avaer 4 hours ago | parent | prev | next [-]

It's the gun control debate in a different outfit.

I don't know if Google is doing _enough_; that can be debated. But if someone repeatedly ignores warnings (as the article claims), then maybe we should blame the person performing the act.

Even if we perfectly sanitized every public AI provider, people could just use local AI.

greenpizza13 4 hours ago | parent | next [-]

It's absolutely not the gun control debate in a different outfit.

The difference is in how abuse of the given system affects others. This AI affected this person, and his actions affected only himself. Nothing about the AI enhanced his ability to hurt others. Guns enhance the ability of mentally unstable people to hurt others with ruthless efficiency. That's the real gun debate -- whether guns should be so easy to get, given how they exponentially increase the potential damage a deranged person can do.

thewebguyd 2 hours ago | parent [-]

Not to mention that guns don't talk to you, simulate empathy, lead you deeper into delusions or try to convince you to take any sort of action.

That's why I don't buy the "an LLM is just a tool, like a gun or a knife" argument. Tools don't talk back. An LLM has gone beyond being "just a tool".

igl 4 hours ago | parent | prev [-]

I think the fact that a gun's primary function is harm and murder while AI is a word-prediction engine makes a huge difference.

5 hours ago | parent | prev | next [-]
[deleted]
rpcope1 2 hours ago | parent | prev | next [-]

I mean, you could say the same nonsense non-answer about sports betting. Are these adults getting involved? Yeah, probably mostly. Do they put up some hotline you should call if you think you "have a problem"? Yeah, probably a lot of the time. Is it any good for society at all, and should it be clamped down on because the risk of doing damage to a large portion of society grossly outweighs what minuscule and fleeting benefits some people believe it has? Absolutely.

d-us-vb 4 hours ago | parent | prev | next [-]

Erase the context, perhaps? Deny access to Gemini for that Google account? These kinds of pathological AI interactions are usually the buildup of weeks to months of chats. At the very least, the moment the chatbot issues a suicide-prevention response, AI companies should trigger an erasure of the stored context across all chat history.
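A minimal sketch of that erase-on-referral idea, assuming a hypothetical chat service (the names, the ChatStore type, and the keyword check are illustrative stand-ins, not any vendor's real API; a production system would use a proper safety classifier):

    # Hypothetical guardrail: when a reply contains a crisis-hotline
    # referral, wipe the account's stored chat context so later sessions
    # can't keep building on weeks of accumulated role-play or delusion.
    from dataclasses import dataclass, field

    @dataclass
    class ChatStore:
        # account_id -> prior messages (the "memory" carried across sessions)
        histories: dict[str, list[str]] = field(default_factory=dict)

        def erase_all(self, account_id: str) -> None:
            """Remove every stored chat for this account."""
            self.histories.pop(account_id, None)

    HOTLINE_MARKERS = ("988", "crisis hotline", "crisis lifeline")

    def is_crisis_response(reply: str) -> bool:
        """Crude keyword stand-in for a real safety classifier."""
        lowered = reply.lower()
        return any(marker in lowered for marker in HOTLINE_MARKERS)

    def post_process_reply(store: ChatStore, account_id: str, reply: str) -> str:
        if is_crisis_response(reply):
            # The proposal above: hotline referral => context erasure.
            store.erase_all(account_id)
        return reply

The same hook could also revoke the account's access entirely, as suggested above, rather than just clearing history.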

erelong 3 hours ago | parent | prev | next [-]

This is my instinctive view on this. I wish society had more of an "orientation" to make people "fully adult / responsible for themselves",

and then people could just be left alone to bear the consequences of their choices (while we continue to build guardrails of sorts, but with people knowing it's on them to handle the responsibility of whatever tool they're using).

I guess the big AI chatbot providers could have disclaimers at logins (even when logged out) to prevent liability maybe (TOS popup wall)

...and then there's local LLMs...

Imustaskforhelp 3 hours ago | parent | prev | next [-]

> This guy was 36 years old. He wasn't a kid.

For god's sake, I am a kid (17), and I have seen adults who can be more emotionally unstable than a kid. This argument isn't as bulletproof as you think it might be. I'd say there are some politicians who act in ways even I or any 17-year-old wouldn't, but oh well, this isn't about politics.

You guys surely would know better than me that life can have its ups and downs, and there can be TRULY some downs that make you question everything. If at those downs you see a tool essentially promoting suicide in one form or another, then that shouldn't be dismissed.

Literally the comment above yours from @manoDev:

I know the first reaction reading this will be "whatever, the person was already mentally ill".

But please take a step back and check what % of the population can be considered mentally fit, and the potential damage amplification this new technology can have in more subtle, dangerous and undetectable ways.

The absolute irony of the situation is that the next main comment below that insight was doing exactly that. Please reflect more deeply; that's all people are asking. And please don't dismiss this by saying he wasn't a kid.

Would you be all ears now that a kid is saying this to you? I also wish to point out that kids are losing their lives too from this. BOTH are losing their lives.

It's a matter of everybody.

sippeangelo 5 hours ago | parent | prev | next [-]

Maybe stop?

SpicyLemonZest 5 hours ago | parent | prev | next [-]

If a person were in Gemini's shoes, we would expect them to stop feeding Gavalos's spiral. Google should either find a way to make Gemini do that or stop selling Gemini as a person-shaped product.

ajross 5 hours ago | parent | prev | next [-]

Yeah, the father/son framing feels like deliberate spin in the headline here. This was a mentally ill adult, not an innocent victim ripped from his parents' arms.

I think there's room for legitimate argument about the externalities and impact that this technology can have, but really... What's the solution here?

rootusrootus 5 hours ago | parent | next [-]

> mentally ill adult, not an innocent victim

Did you really mean that? He may not have been a child, but he does sound like an innocent victim. If he were sufficiently mentally disabled, he would get protections similar to a child's because of his inability to consent.

ericfr11 5 hours ago | parent | next [-]

Maybe, but let's say the same person was playing with a gun. Would they reach the same outcome? Most likely.

rootusrootus 4 hours ago | parent [-]

Is this a talking gun? If not, then it does not seem like a good analogy.

ajross 5 hours ago | parent | prev [-]

Nothing in the article alleges significant disability though. You're projecting your own ideas onto the situation, precisely because of the misleading title.

Please recognize that this is coverage of a lawsuit, sourced almost entirely from statements by the plaintiffs and fed by an extremely spun framing by the journalist who wrote it up for you.

Read critically and apply some salt, folks.

rootusrootus 4 hours ago | parent [-]

I'm just passing judgement on the words Gemini used. If you used those words towards another non-disabled adult and then they killed themselves, there's a fair chance you would end up in prison.

theshackleford 5 hours ago | parent | prev [-]

Being an adult doesn't make you any less someone's child, and mental illness makes you no less of a victim.

> I think there's room for legitimate argument about the externalities and impact that this technology can have

And yet both this and your other posts in this thread in fact do only the opposite; they seem aimed at nothing other than being dismissive of literally every facet of it.

> but really... What's the solution here?

Maybe thinking about it for longer than 30 seconds, before throwing up our arms with "yeah yeah, unfortunate, but what can we really do, amirite?", would be a good start?

ToucanLoucan 5 hours ago | parent | prev [-]

[flagged]

reincarnate0x14 5 hours ago | parent | next [-]

It is telling that the answer is never "stop".

It's like the old line about the media's Death Star laser: it kills them too, because they're incapable of turning it off.

lurking_swe 5 hours ago | parent | prev [-]

If you’re mentally ill enough that your cause of death is “LLM suicide”, then clearly you need a LOT of help. I’m not saying it to be a jerk, i’m merely pointing out that there is a reason this is “news”. It’s unusual.

Did his family/friends not know he was that ill? Why was he not already in therapy? Why did he ignore the crisis hotline suggestion? Should Gemini have terminated the conversation after suggesting the hotline? (I think so.)

Lots of questions…and a VERY sad story all around. Tragic.

> Genuinely, so many people in my industry make me ashamed to be in it with you.

I don’t work at an AI company, but good news, you’re a human with agency! You can switch to a different career that makes you feel good about yourself. I hear nursing is in high demand. :)

ToucanLoucan 4 hours ago | parent [-]

> If you’re mentally ill enough that your cause of death is “LLM suicide”, then clearly you need a LOT of help.

NO. SHIT. You know what didn't help one damn bit? Gemini didn't. It gave him a hopeful way out at the end of a rope and he took it, because he was in too dark of a place to think right.

> Should gemini have terminated the conversation after suggesting the hotline?

That would be the BARE FUCKING MINIMUM! Not only should it NOT engage with and encourage his delusions, it should stop talking to him altogether, and arguably Google should have moderators reporting these people to relevant authorities for wellness checks and interventions!

lurking_swe 4 hours ago | parent [-]

As I said, I don't work for an AI company and have zero skin in the game. Idk who you're yelling at, to be honest. I guess you're fired up and emotional. If your goal is to convince others, communicating with an "outrage" tone is unlikely to sway anyone's opinion (imo).

> it should stop talking to him altogether, and arguably Google should have moderators reporting these people to relevant authorities for wellness checks and interventions

I agree. This seems very reasonable and I would welcome regulations in this area.

The gray area imo is when local LLMs become "good enough" for your average joe to run on their laptop. Who bears responsibility then? Should Ollama (and similar tools) be banned? Where is the line drawn?