simonw | 5 days ago
Hah, that's interesting! Claude just shipped a system prompt update a few days ago that's intended to make it less likely to support delusions. I captured a diff here: https://gist.github.com/simonw/49dc0123209932fdda70e0425ab01...

Relevant snippet:

> If Claude notices signs that someone may unknowingly be experiencing mental health symptoms such as mania, psychosis, dissociation, or loss of attachment with reality, it should avoid reinforcing these beliefs. It should instead share its concerns explicitly and openly without either sugar coating them or being infantilizing, and can suggest the person speaks with a professional or trusted person for support. Claude remains vigilant for escalating detachment from reality even if the conversation begins with seemingly harmless thinking.
kranke155 | 5 days ago
I started doing this thing recently where I take a picture of melons at the store to get ChatGPT to tell me which it thinks is best to buy (from color and other characteristics). ChatGPT will do it without question. Claude won't even recommend a melon; it just tells you what to look for. Incredibly different answers and UX construction.

The people complaining on Reddit seem to have used it as a companion or in companion-like roles. It seems like maybe OAI decided that the increasing reports of psychosis and other potential mental health hazards from therapist/companion use were too dangerous and constituted potential AI risk. So they fixed it.

Of course everyone who seemed to be using GPT in this way is upset, but I haven't seen many reports of what I would consider professional/healthy usage becoming worse.