npteljes 6 days ago
Yeah, I was one such person, but I might give up on this ultimately. If I do, it will be for CYA reasons, not because I think it's a bad thing overall. In this particular case, the outcome is horrible, and the answers that ChatGPT provided were inexcusable. But looking at the bigger picture, how much better are a person's chances when everyone tells them to "go to therapy" or to "talk to others" and such? What others? Searching "online therapy", BetterHelp is the second result. BetterHelp doesn't exactly have a good reputation online, but their influence is widespread nonetheless. Licensed therapists can be bad actors too. There is no general "good thing" that is tried and true for every particular case of human mental health, and even setting that aside, the position gets abused just like any other position of authority or power, with many bad therapists out there. Not to mention the other people who pose as (mental) health experts, life coaches, and such. Or the people who recruit for a cult.

Frankly, even in the face of this horrible event, I'm not convinced that AI in general fares that much worse than the sum of the people who offer a recipe for a better life, skills, company, camaraderie. Rather, I feel like AI is in a situation like self-driving cars, where we expect the new thing to be 110%, even though we know the old thing is far from perfect.

I do think that OpenAI is liable though, and rightfully so. Their service has a lot of power to influence, as clearly outlined in the tragedy described in the article. And so they also have a lot of responsibility to rein that in. If this were a forum where the teen was pushed to suicide, police could go after the forum participants, moderators, and admins. But in the case of OpenAI, there is no such person; the service itself is the thing. So the one liable must be the company that provides the service.