| ▲ | cakealert 6 days ago |
Would it be any different if it were an offline model? When someone uses a tool and surrenders their decision-making power to the tool, shouldn't they be the ones solely responsible? The liability culture only gives lawyers more money and depresses innovation. Responsibility is a thing.
|
| ▲ | kelnos 6 days ago | parent | next [-] |
On one hand I agree with you on the extreme litigiousness of (American?) culture, but on the other, certain people have a legal duty to report when it comes to minors who voice suicidal thoughts. Currently that's only professionals like therapists, teachers, school counselors, etc. But what does an LLM chatbot count as in these situations?

The kid was using ChatGPT as a sort of therapist, even if that's generally not a good idea. And if it weren't for ChatGPT, would this kid have instead talked to someone who would have ensured that he got the help he needed? Maybe not. But we have to consider the possibility. I think it's really, really blurry.

I think the mom's reaction of "ChatGPT killed my son" is ridiculous: no, your son killed himself. ChatGPT facilitated it, based on questions it was asked by your son, but your son did it. And it sounds like he even tried to get a reaction out of you by "showing" you the rope marks on his neck, but you didn't pay attention. I bet you feel guilty about that. I would too, in your position. But foisting your responsibility onto a computer program is not the way to deal with it. (Not placing blame here; everybody misses things, and no one is "on" 100% of the time.)

> Responsibility is a thing.

Does OpenAI (etc.) have a responsibility to reduce the risk of people using their products in ways like this? Legally, maybe not, but I would argue that they absolutely have a moral and ethical responsibility to do so. Hell, this was pretty basic ethics taught in my engineering classes 25 years ago. Based on the chat excerpts the NYT reprinted, it seems like these conversations should have tripped a safety filter that either cut off the conversations entirely or notified someone that something was very, very wrong.
▲ | cakealert 6 days ago | parent | next [-] | | [flagged]
▲ | hackit2 6 days ago | parent | next [-] | | Sad to see what happened to the kid, but to point the finger at a language model is just laughable. It shows a complete breakdown of society and of the caregivers entrusted with responsibility.
▲ | GuinansEyebrows 6 days ago | parent [-] | | People are (rightly) pointing the finger at OpenAI, the organization composed of human beings, all of whom made decisions along the way to release a language model that encouraged a child to attempt and complete suicide.
▲ | nbngeorcjhe 6 days ago | parent | prev [-] | | [flagged]
▲ | hackit2 6 days ago | parent [-] | | [flagged]
▲ | gabriel666smith 6 days ago | parent | next [-] | |
> You cannot be empathetic to complete strangers.

Why not? I’m not trying to inflame this further, I’m genuinely interested in your logic for this statement.
▲ | hackit2 6 days ago | parent [-] | | In high social cohesion there is social pressure to adhere to reciprocation; however, this starts breaking down above a certain group size. Empathy, like all emotions, requires effort and cognitive load, and without things being mutual you will slowly become drained, bitter, and resentful because of empathy fatigue. To prevent emotional exhaustion and conserve energy, a person's empathy is like a sliding scale that is constantly adjusted based on the closeness of their relationship with others.
▲ | gabriel666smith 6 days ago | parent [-] | | Thank you for your good-faith explanation.

> Empathy, like all emotions, requires effort and cognitive load, and without things being mutual you will slowly become drained, bitter, and resentful because of empathy fatigue.

Do you have a source study, or is this anecdotal or speculative? Again, genuinely interested, as it’s a claim I see often but haven’t been able to pin down.

(While attempting not to virtue-signal) I personally often find it easier to empathize with people I don’t know, which is why I’m interested. I don’t expect mutual empathy from someone who doesn’t know who I am. Equally, I try not to consume much news media, as the ‘drain’ I experience feels as though it comes from a place of empathy when I see sad things. So I think I experience a version of what you’re suggesting, and I’m interested in why our language is quite oppositional despite this.
▲ | latexr 6 days ago | parent | prev [-] | |
> You cannot be empathetic to complete strangers.

Of course you can, and it’s genuinely worrying you so vehemently believe you can’t. That’s what support groups are: strangers in similar circumstances being empathetic to each other to get through a hurtful situation.

“I told you once that I was searching for the nature of evil. I think I’ve come close to defining it: a lack of empathy. It’s the one characteristic that connects all the defendants. A genuine incapacity to feel with their fellow man. Evil, I think, is the absence of empathy.”

— Gustave Gilbert, author of “Nuremberg Diary”, an account of interviews conducted during the Nuremberg trials of high-ranking Nazi leaders.
▲ | latexr 6 days ago | parent | prev [-] | |
> I think the mom's reaction of "ChatGPT killed my son" is ridiculous: no, your son killed himself. ChatGPT facilitated it (…)

That whole paragraph is quite something. I wonder what you’d do if you were given the opportunity to repeat those words in front of the parents. I suspect (and hope) some empathy might kick in and you’d realise that the pedantry and shilling for the billion-dollar company selling a statistical word generator as if it were a god isn’t the response society needs.

Your post read like the real-life version of that dark humour joke:

> Actually, the past tense is “hanged”, as in “he hanged himself”. Sorry about your Dad, though.
▲ | novok 6 days ago | parent [-] | | You do have empathy for the person who suffered a tragedy, but that doesn't mean you go into full safetyism / scapegoating that causes significantly less safety and far more harm because of the emotional weight of something in the moment.

It's like making therapists liable for patients who commit suicide, or for patients with eating disorders who indirectly kill themselves. What ends up happening when you do that is therapists avoiding suicidal people like the plague; suicidal people get far less help and more people commit suicide, not fewer. That is the essence of the harms of safetyism. You might not think that is real, but I know many therapists via family ties, and handling suicidal people is an issue that comes up constantly. Many do try to filter them out because they don't want to be dragged into a lawsuit, even one they would win. This is literally reality today.

Doing this with AI will result in kids being banned from AI apps, or forced to have their parents access and read all AI chats. This will drive them into Discord groups of teens who egg each other on to commit suicide, and now you can't do anything about it, because private communication channels between ordinary, non-commercial people have far stronger rights against censorship and teens are amazing at avoiding supervision. At least with AI models you have a chance to develop something that could actually figure it out for once and get the moderation balance right.
▲ | latexr 6 days ago | parent [-] | | That is one big slippery slope fallacy. You are inventing motives, outcomes, and future unproven capabilities out of thin air. It’s a made-up narrative which does not reflect the state of the world and requires one to buy into a narrow, specific worldview. https://en.wikipedia.org/wiki/Slippery_slope
▲ | novok 6 days ago | parent [-] | | Instead of just saying “that's not true”, could you actually point out how it is not?
▲ | latexr 5 days ago | parent [-] | | I initially tried, but your whole comment is one big slippery slope salad, so I had to stop or else I’d be commenting on every line, and that felt absurd.

For example, you’re extrapolating one family making a complaint into a world of “full safetyism / scapegoating”. You also claim it would cause “significantly less safety and far more harm”, which you don’t know. In that same vein you extrapolate into “kids being banned from AI apps” or “forced” (forced!) “to have their parents access and read all AI chats”. Then you go full-on into how that will drive them into Discord servers where they’ll “egg each other on to commit suicide”, as if that’s the one thing teenagers on Discord do. And on, and on.

I hope it’s clear why I found it pointless to address your specific points. I’m not being figurative when I say I’d have to reproduce your own comment in full.
| ▲ | incone123 6 days ago | parent | prev | next [-] |
That argument makes sense for a mentally capable person choosing not to use eye protection while operating a chainsaw, but it's much less clear that a person who is, by definition, mentally ill is capable of making such an informed choice.
▲ | cakealert 6 days ago | parent [-] | | Such a person should not be interacting with an LLM then. And failure to abide by this rule is either the fault of his caregivers, his own, or no one's.
| ▲ | lm28469 6 days ago | parent | prev [-] |
> Responsibility is a thing.

Well yeah, it's also a thing for companies/execs, no? Remember, they're paid so much because they take __all__ the responsibility, or at least that's what they say.