danaris 3 hours ago

If it's AI-generated, it is fundamentally not CSAM.

The reason we shifted to the terminology "CSAM", away from "child pornography", is specifically to indicate that it is Child Sexual Abuse Material: that is, an actual child was sexually abused to make it.

You can call it child porn if you really want, but do not call something that never involved the abuse of a real, living, flesh-and-blood child "CSAM". (Or "CSEM"—"Exploitation" rather than "Abuse"—which is used in some circles.) This includes drawings, CG animations, written descriptions, videos where such acts are simulated with a consenting (or, tbh, non-consenting—it can be horrific, illegal, and unquestionably sexual assault without being CSAM) adult, as well as anything AI-generated.

These kinds of distinctions in terminology are important, and yes I will die on this hill.

yellowapple a minute ago

I think the one case where I'd disagree is when it's a depiction of an actual person - say, someone creates pornography (be it AI-generated, drawn, CG-animated, etc.) depicting a person who actually exists in the real world, not just some invented character. That's a case where it would cross into actual CSAM/CSEM: even though the child isn't physically abused or exploited in the way the work depicts, such a defamatory use of the child's likeness would constitute psychological abuse/exploitation.

ashleyn 3 hours ago

This is where my technical knowledge of genAI breaks down, but wouldn't an image generator be unable to produce such imagery unless honest-to-god CSAM were used in the training of it?

gs17 an hour ago

It's like the early demo for DALL-E where you could get "an armchair in the shape of an avocado", which presumably wasn't in the training set, but enough was in it to generalize the "armchair" and "avocado" concepts and combine them.
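For a concrete sense of what that kind of concept composition looks like in practice, here's a minimal sketch using the Hugging Face diffusers library. The checkpoint name and prompt are illustrative stand-ins, not the actual DALL-E setup from the demo:

    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative checkpoint; any Stable Diffusion 1.x checkpoint behaves similarly.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The prompt doesn't need a matching training image: the model combines
    # the "armchair" and "avocado" concepts it learned from separate examples.
    image = pipe("an armchair in the shape of an avocado").images[0]
    image.save("avocado_armchair.png")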

6ix8igth 3 hours ago

It's possible for the model to take disparate concepts and put them together. E.g., you can train a LoRA to teach Stable Diffusion what a cowboy hat is, then ask for Dracula in a cowboy hat. That combination probably doesn't exist in its training data, but it will give it to you just fine. I'm not about to try, but I would assume the same would apply for child pornography.
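To make the LoRA part concrete, a rough sketch of that composition step with the diffusers API. The LoRA path is hypothetical (a stand-in for an adapter fine-tuned on cowboy-hat images); the point is that the adapter's concept can be mixed with concepts the base model already has:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative base checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # Hypothetical LoRA adapter trained only on cowboy-hat images; it adds the
    # "cowboy hat" concept without retraining the whole model.
    pipe.load_lora_weights("path/to/cowboy_hat_lora")

    # The base model supplies "Dracula", the LoRA supplies "cowboy hat"; the
    # combined image need not resemble anything in either training set.
    image = pipe("Dracula wearing a cowboy hat").images[0]
    image.save("dracula_in_cowboy_hat.png")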