novok | 6 days ago
You do have empathy for the person who suffered a tragedy, but that doesn't mean you go into full safetyism / scapegoating that causes significantly less safety and far more harm because of the emotional weight of the moment. It's like making therapists liable when their patients commit suicide, or indirectly liable when people with eating disorders do. What ends up happening is that therapists avoid suicidal people like the plague, suicidal people get far less help, and more people commit suicide, not fewer. That is the essence of the harm of safetyism. You might not think that is real, but I know many therapists via family ties, and handling suicidal people is an issue that comes up constantly. Many do try to filter them out because they don't want to be dragged into a lawsuit, even one they would win. This is literally reality today.

Doing this with AI will result in kids being banned from AI apps, or being forced to let their parents access and read all their AI chats. That will drive them into Discord groups of teens who egg each other on to commit suicide, and now you can't do anything about it, because private communication between ordinary humans has far stronger protections against censorship, and teens are amazing at avoiding supervision. At least with AI models you have a chance to develop something that could actually figure out the moderation balance for once.
latexr | 6 days ago | parent
That is one big slippery slope fallacy. You are inventing motives, outcomes, and future unproven capabilities out of thin air. It's a made-up narrative that does not reflect the state of the world and requires one to buy into a narrow, specific worldview.