zoeysmithe 4 days ago

I was just reading about a suicide tied to AI chatbot "therapy" use.

This stuff is a nightmare scenario for the vulnerable.

vessenes 4 days ago | parent | next [-]

If you want to feel worried, check the Altman AMA on reddit. A lottttt of people have a parasocial relationship with 4o. Not encouraging.

codedokode 4 days ago | parent [-]

Why doesn't OpenAI block the chatbot from participating in such conversations?

robotnikman 4 days ago | parent | next [-]

Probably because there is a massive demand for it, no doubt powered by the loneliness a lot of people report feeling.

Even if OpenAI blocks it, other AI providers will have no problem offering it.

jacobsenscott 4 days ago | parent | prev | next [-]

Because the information people dump into their "ai therapist" is holy grail data for advertisers.

lm28469 4 days ago | parent | prev [-]

Why would they?

codedokode 4 days ago | parent [-]

To prevent something bad from happening?

ipaddr 4 days ago | parent [-]

But that also prevents the good.

at-fates-hands 4 days ago | parent | prev | next [-]

It's already a nightmare:

From June of this year: https://gizmodo.com/chatgpt-tells-users-to-alert-the-media-t...

Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

A recent study found that chatbots designed to maximize engagement end up creating “a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies.” The machine is incentivized to keep people talking and responding, even if that means leading them into a completely false sense of reality filled with misinformation and encouraging antisocial behavior.

sys32768 4 days ago | parent | prev | next [-]

This happens to real therapists too.

PeterCorless 4 days ago | parent | prev | next [-]

Endless AI nightmare fuel.

https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...

lupire 4 days ago | parent | prev | next [-]

Please cite your source.

I found this one: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...

When someone is suicidal, anything in their life can be tied to suicide.

In the linked case, the suffering teen was talking to a chatbot modeled on a fictional character from a book that was "in love" with him (a 2024 model that basically just parrots back whatever the user says with a loving spin). It's quite a stretch to claim the AI was encouraging a suicide, in contrast to a situation where someone was persuaded to try to meet a dead person in an afterlife, or bullied into killing themself.
