bartread 3 days ago
> Any halfway decent therapist will spot these behaviors and at least not encourage them. LLM therapists seem to spot these behaviors and give the user what they want to hear.

FWIW I agree with you, but I think some portion of people who want to engage in "disingenuous" therapy with an LLM would do the same with a human therapist, and won't derive any benefit from therapy as a result. I've seen this firsthand in the lives of people I've known, one of them very close to me. It's impossible to break the cycle without good-faith engagement, and bad-faith engagement is just as possible with humans as it is with robots.
tempestn 3 days ago | parent
Yes, except with a human the worst case is generally that they don't see any benefit, as you said. With an AI it can be quite a bit worse than that if it starts reinforcing harmful beliefs or tendencies.