causal 6 days ago
It is disturbing, but I think a human therapist would also have told him not to do that, and would instead have resorted to some other intervention. It is maybe an example of why having a partial therapist is worse than none: it had the training data to know a real therapist wouldn't encourage displaying nooses at home, but lacked the holistic humanity and embodiment needed to intervene appropriately. Edit: I should add that the sycophantic "trust me only"-type responses bear no resemblance to appropriate therapy, and are where OpenAI most likely holds responsibility for its model's influence.
incone123 6 days ago | parent
Even here you are anthropomorphising. It doesn't 'know' anything. A human therapist would escalate this to a doctor or even EMS.