novok 6 days ago

You can have empathy for the person who suffered a tragedy, but that doesn't mean you go into full safetyism / scapegoating that, because of the emotional weight of the moment, causes significantly less safety and far more harm.

It's like making therapists liable when their patients commit suicide, or when people with eating disorders die indirectly by suicide. What ends up happening when you do that is therapists avoid suicidal people like the plague, suicidal people get far less help, and more people commit suicide, not fewer. That is the essence of the harms of safetyism.

You might not think that is real, but I know many therapists via family ties, and handling suicidal people is an issue that comes up constantly. Many do try to filter them out because they don't want to be dragged into a lawsuit even if they would win it. This is literally reality today.

Doing this with AI will result in kids being banned from AI apps, or forced to have their parents access and read all their AI chats. This will drive them into Discord groups of teens who egg each other on to commit suicide, and now you can't do anything about it, because private communication between ordinary people has far stronger rights against censorship, and teens are amazing at avoiding supervision. At least with AI models you have a chance to develop something that could actually figure this out for once and solve the moderation balance.

latexr 6 days ago | parent [-]

That is one big slippery slope fallacy. You are inventing motives, outcomes, and future unproven capabilities out of thin air. It's a made-up narrative that does not reflect the state of the world and requires one to buy into a narrow, specific world view.

https://en.wikipedia.org/wiki/Slippery_slope

novok 6 days ago | parent [-]

Instead of just saying "that's not true", could you actually point out how it is not?

latexr 5 days ago | parent [-]

I initially tried, but your whole comment is one big slippery slope salad, so I had to stop or else I'd be commenting on every line, and that felt absurd.

For example, you’re extrapolating one family making a complaint to a world of “full safetyism / scapegoating”. You also claim it would cause “significantly less safety and far more harm”, which you don’t know. In that same vein you extrapolate into “kids being banned from AI apps” or “forced” (forced!) “to have their parents access and read all AI chats”. Then you go full on into how that will drive them into Discord servers where they’ll “egg each other on to commit suicide” as if that’s the one thing teenagers on Discord do.

And on, and on. I hope it’s clear why I found it pointless to address your specific points. I’m not being figurative when I say I’d have to reproduce your own comment in full.