fghorow 11 hours ago

Yes. ChatGPT "safely" helped[1] my friend's daughter write a suicide note.

[1] https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-h...

overgard 9 hours ago | parent | next [-]

I have mixed feelings on this (besides obviously being sad about the loss of a good person). I think one of the useful things about AI chat is that you can talk about things that are difficult to discuss with another human, whether it's an embarrassing question or just things you don't want people to know about you. So it strikes me that trying to add a guard rail for everything that reflects poorly on a chat agent would reduce its utility. I think people have trouble talking about suicidal thoughts to real therapists because AFAIK therapists have a duty to report self-harm, which makes people less likely to bring it up.

One thing I do think is dangerous with the current LLM models, though, is the sycophancy problem. Like, all the time ChatGPT is like "Great question!". Honestly, most of my questions are not "great", nor are my insights "sharp", but flattery will get you a lot of places. I just worry that these things attempting to be agreeable lets people walk down paths where a human would be like "ok, no".

magicalhippo 9 hours ago | parent | next [-]

> Like, all the time chatGPT is like "Great question!".

I've been trying out Gemini for a little while, and quickly got annoyed by that pattern. These models seem overly trained to agree maximally.

However, in the Gemini web app you can add instructions that are inserted into each conversation. I've added that it shouldn't assume my suggestions are good by default, but should offer critique where appropriate.

And so every now and then it adds a critique section, where it states why it thinks what I'm suggesting is a really bad idea or similar.

Overall it's doing a good job, and I feel something like this should have been the default behavior.

wolvoleo 3 hours ago | parent [-]

You can insert a custom default prompt on pretty much every AI under the sun these days, not just Gemini.

FireBeyond 9 hours ago | parent | prev [-]

> One thing that I think is dangerous with the current LLM models though is the sycophancy problem. Like, all the time chatGPT is like "Great question!"

100%

In ChatGPT I have the Basic Style and Tone set to "Efficient: concise and plain". For Characteristics I've set:

- Warm: less

- Enthusiastic: less

- Headers and lists: default

- Emoji: less

And custom instructions:

> Minimize sycophancy. Do not congratulate or praise me in any response. Minimize, though not eliminate, the use of em dashes and over-use of “marketing speak”.
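
If you're calling the API directly rather than using the web UI, you can get roughly the same effect by prepending the instructions as a system message. A minimal sketch (the instruction wording and helper function are my own, not anything official; the actual network call is left commented out):

```python
# Sketch: applying anti-sycophancy custom instructions as a system
# message when calling a chat completions API, instead of setting
# them in the ChatGPT web UI. Wording here is illustrative.

SYSTEM_PROMPT = (
    "Minimize sycophancy. Do not congratulate or praise the user in any "
    "response. Be concise and plain; avoid marketing speak."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the custom instructions as a system message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# With the official client (requires OPENAI_API_KEY), roughly:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",  # model name is an assumption
#     messages=build_messages("Review this design for flaws."),
# )
```

Whether the model actually honors it is another matter, as the sibling comment notes.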

wolvoleo 3 hours ago | parent [-]

Yeah, why are basically all models so sycophantic anyway? I'm so done with getting encouragement and appreciation for my choices even when they're clearly wrong.

I tried similar prompts but they didn't really work.

lbeckman314 11 hours ago | parent | prev | next [-]

https://archive.is/fuJCe

(Apologies if this archive link isn't helpful, the unlocked_article_code in the URL still resulted in a paywall on my side...)

fghorow 11 hours ago | parent | next [-]

Thank you. And shame on the NYT.

LeoPanthera 11 hours ago | parent | prev [-]

We probably shouldn't be using the "archive" site that hijacks your browser into DDOSing other people. I'm actually surprised HN hasn't banned it.

lbeckman314 10 hours ago | parent | next [-]

Oof, TIL. Thanks for the heads up, that's a shame!

https://meta.stackexchange.com/questions/417269/archive-toda...

https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment...

https://gyrovague.com/2026/02/01/archive-today-is-directing-...

10 hours ago | parent [-]
[deleted]
observationist 10 hours ago | parent | prev | next [-]

Some of us have, and some of us still use it. The functionality and the need for an archive not subject to the same constraints as the wayback machine and other institutions outweighs the blackhat hijinks and bickering between a blogger and the archive.is person/team.

My own ethical calculus is that they shouldn't be ddos attacking, but on the other hand, it's the internet equivalent of a house egging, and not that big a deal in the grand scheme of things. It probably got gyrovague far more attention than they'd have gotten otherwise, so maybe they can cash in on that and thumb their nose at the archive.is people.

Regardless, maybe "we" shouldn't be telling people what sites to use or not use. If you want to talk morals and ethics, then you'd better stop using Gmail, Amazon, eBay, Apple, Microsoft, any frontier AI, and hell, your ISP has probably done more evil things since last Tuesday than the average person gets up to in a lifetime, so no internet, either. And totally forget about cellular service. What about the state you live in, or the country? Are they appropriately pure and ethical, or are you going to start telling people they need to defect to some bastion of ethics and nobility?

Real life is messy. Purity tests are stupid. Use archive.is for what it is, and the value it provides which you can't get elsewhere, for as long as you can, because once they're unmasked, that sort of thing is gone from the internet, and that'd be a damn shame.

sonofhans 9 hours ago | parent [-]

My guess is that you’ve not had your house egged, or have some poverty of imagination about it. I grew up in the midwest where this did happen. A house egging would take hours to clean up, and likely cause permanent damage to paint and finishes.

Or perhaps you think it’s no big deal to damage someone else’s property, as long as you only do it a little.

Jon_Lowtek 7 hours ago | parent [-]

They just wrote a paragraph about evil being easy, convenient, and value-providing; about how the evilness of others legitimizes their own; and about how the inability to achieve absolute moral purity means one small evil deed is indistinguishable from being evil all the time. They dismissed trying to avoid evil as stupid, claimed that only those with unachievable moral purity should be allowed to lecture about ethics, and literally gave a shout-out to hell. I don't think property damage is what we need to worry about. Walk away slowly and do not accept any deals or whataboutisms.

zahlman 9 hours ago | parent | prev | next [-]

I can't find the claimed JS in the page source as of now, and also it displays just fine with JS disabled.

armchairhacker 9 hours ago | parent | prev | next [-]

I’d be happy if people stopped linking to paywalled sites in the first place. There’s usually a small blog on the same topic, and ironically the small blogs posted here are better quality.

But otherwise, without an alternative, the entire thread becomes useless. We’d have even more RTFA, degrading the site even for people who pay for the articles. I much prefer keeping archive.today to that.

edm0nd 10 hours ago | parent | prev [-]

Eh, both ArchiveToday and gyrovague are shit humans. It's really just a conflict between two nerds, not "other people".

They need to just hug it out and stop doxing each other lol

zer00eyz 8 hours ago | parent | prev | next [-]

Do I feel bad for the above person?

I do. Deeply.

But having lived through the '80s and '90s and the satanic panic, I gotta say this is dangerous ground to tread. If this had been a forum user, rather than an LLM, who had done all the same things and not reached out, it would have been a tragedy, but the story would just have been one among many.

The only reason we're talking about this is that anything related to AI gets eyeballs right now. And our youth suicide epidemic outweighs other issues that get far more attention and money at the moment.

NedF 9 hours ago | parent | prev | next [-]

[dead]

OutOfHere 10 hours ago | parent | prev | next [-]

[flagged]

plorg 8 hours ago | parent [-]

You surely understand that this is not what GP is describing.

optimalsolver 10 hours ago | parent | prev | next [-]

[flagged]

andrewflnr 10 hours ago | parent | next [-]

They're in an impossible situation they created themselves and inflict on the rest of us. Forgive us if we don't shed any tears for them.

bigyabai 10 hours ago | parent [-]

Sure, so is Google Chrome for abetting them with a browser, and Microsoft for not using their Windows spyware to call a suicide hotline.

I don't empathize with any of these companies, but I don't trust them to solve mental health either.

sonofhans 9 hours ago | parent [-]

False equivalence; a hammer and a chatbot are not the same. Browsers and operating systems are tools designed to facilitate actions, not to give mental health opinions on free-text inquiries. Once it starts writing suicide notes you don’t get to pretend it’s a hammer anymore.

andrewflnr 7 hours ago | parent [-]

I think the distinction is a bit more subtle than "designed to facilitate actions", which you could argue also applies to an LLM. But a browser is a conduit for ideas from elsewhere or from its user. An LLM... well, kind of breaks the categorization of conduit vs originator, but that's sufficient to show the equivalence is false.

sumeno 10 hours ago | parent | prev | next [-]

The leaders of these LLM companies should be held criminally liable for their products in the same way that regular people would be if they did the same thing. We've got to stop throwing up our hands and shrugging when giant corporations are evil.

logicx24 10 hours ago | parent | next [-]

Regular people would not be held liable for this. It would be a dubious case even if a human helped another human to do this.

longfacehorrace 10 hours ago | parent | next [-]

Regular people don't have global reach and influence over humanity's agency, attention, beliefs, politics and economics.

logicx24 8 hours ago | parent [-]

If Donald Trump did this, he wouldn't be criminally liable either.

sumeno 10 hours ago | parent | prev | next [-]

There have absolutely been cases of people being held criminally liable for encouraging someone to commit suicide.

In California it is a felony

> Any person who deliberately aids, advises, or encourages another to commit suicide is guilty of a felony.

https://california.public.law/codes/penal_code_section_401

zahlman 9 hours ago | parent [-]

>>>> helped... write a suicide note.

> encouraging someone to commit suicide.

These are not the same thing. And the evidence from the article is that the bot was anything but encouraging of this plan, up until the end.

sumeno 9 hours ago | parent | next [-]

That's for the jury to decide.

FireBeyond 9 hours ago | parent | prev [-]

Very cherry picked. That would absolutely be "aiding" someone. "I don't want my family to worry about what's happening".

lokar 10 hours ago | parent | prev [-]

A therapist might face major consequences

wiseowise 10 hours ago | parent | prev [-]

Held criminally liable for what, exactly?

wiseowise 10 hours ago | parent | prev [-]

[flagged]

wetpaws 10 hours ago | parent | prev | next [-]

[dead]

wiseowise 10 hours ago | parent | prev | next [-]

[flagged]

fghorow 10 hours ago | parent [-]

May you never need to be in a bereaved parent's shoes.

bigyabai 10 hours ago | parent | next [-]

Many of us aren't, which is why it's hard to blame businesses like OpenAI for doing nothing.

The parent's jokey tone is unwarranted, but their overall point is sound. The more blame we assign to inanimate systems like ChatGPT, the more consent we furnish for inhumane surveillance.

wiseowise 10 hours ago | parent | prev [-]

[flagged]

weakfish 9 hours ago | parent [-]

This comment doesn’t belong on this forum, even aside from the horrible lack of empathy

wiseowise 33 minutes ago | parent [-]

Why? Because you can’t guilt trip me into submission I need to be removed? And because I don’t buy media’s blatant abuse of the situation I lack empathy?

10 hours ago | parent | prev [-]
[deleted]