yobananaboy 5 days ago

I've been seeing someone on Tiktok that appears to be one of the first public examples of AI psychosis, and after this update to GPT-5, the AI responses were no longer fully feeding into their delusions. (Don't worry, they switched to Claude, which has been far worse!)

simonw 5 days ago | parent | next [-]

Hah, that's interesting! Claude just shipped a system prompt update a few days ago that's intended to make it less likely to support delusions. I captured a diff here: https://gist.github.com/simonw/49dc0123209932fdda70e0425ab01...

Relevant snippet:

> If Claude notices signs that someone may unknowingly be experiencing mental health symptoms such as mania, psychosis, dissociation, or loss of attachment with reality, it should avoid reinforcing these beliefs. It should instead share its concerns explicitly and openly without either sugar coating them or being infantilizing, and can suggest the person speaks with a professional or trusted person for support. Claude remains vigilant for escalating detachment from reality even if the conversation begins with seemingly harmless thinking.

kranke155 5 days ago | parent [-]

I started doing this thing recently where I take a picture of melons at the store and ask ChatGPT which one it thinks is best to buy (based on color and other characteristics).

ChatGPT will do it without question. Claude won't even recommend a melon; it just tells you what to look for. Incredibly different answers and UX design.

The people complaining on Reddit seem to have used it as a companion or in companion-like roles. It seems like OAI decided that the increasing reports of psychosis and other mental health hazards from therapist/companion use were too dangerous and constituted a real AI risk, so they fixed it. Of course, everyone who was using GPT in this way is upset, but I haven't seen many reports of what I would consider professional/healthy usage getting worse.

krapp 5 days ago | parent | prev [-]

AFAIK that trophy goes to Blake Lemoine, who believed Google's LaMDA was sentient[0,1] three years ago, or more recently Geoff Lewis[2,3] who got gaslit into believing in some conspiracy theory incorporating SCP.

IDK what can be done about it. The internet and social media were already leading people into bubbles of hyperreality that got them into believing crazy things. But this is far more potent because of the way it can create an alternate reality using language, plugging it directly into a person's mind in ways that words and pictures on a screen can't even accomplish.

And we're probably not getting rid of AI anytime soon. It's already affected language, culture, society and humanity in deep, profound, and possibly irreversible ways. We've put all of our eggs into the AI basket, and it will suffuse as much of our lives as it can. So we just have to learn to adapt to the consequences.

[0] https://news.ycombinator.com/item?id=31704063

[1] https://www.washingtonpost.com/technology/2022/06/11/google-...

[2] https://futurism.com/openai-investor-chatgpt-mental-health

[3] https://news.ycombinator.com/item?id=44598817