d2049 3 days ago

Reminder that Sam Altman chose to rush the safety process for GPT-4o so that he could launch before Gemini, which then led directly to this teen's suicide:

https://news.ycombinator.com/item?id=45026886

richwater 3 days ago | parent | next [-]

> which then led directly to this teen's suicide

Incredible logic jump with no evidence whatsoever. Thousands of people commit suicide every year without AI.

> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing.

Somehow it's ChatGPT's fault?

Chris2048 3 days ago | parent | next [-]

It'd be worse if the bot became a nannying presence - either pre-emptively denying anything negative based on the worst-case scenario, or otherwise taking in far more context than it should.

How would a real human (with, let's say, an obligation to be helpful and answer prompts) act any differently? Perhaps they would take in more context naturally - but otherwise it's impossible to act any differently. Watching GoT could have driven someone to suicide, yet we don't ban it on that basis - it was the mental illness that killed, not the freedom to feed it.

geephroh 3 days ago | parent | prev [-]

https://www.humanetech.com/podcast/how-openai-s-chatgpt-guid...

https://www.nytimes.com/2025/09/16/podcasts/the-daily/chatgp...

Chris2048 3 days ago | parent [-]

Can you comment on your own opinions or takeaways from those articles, rather than just link-dumping?

decremental 3 days ago | parent [-]

[dead]

throwaway98797 3 days ago | parent | prev [-]

build something