Eisenstein 2 hours ago
> Uncontrolled proliferation of AI-CSAM makes detection of "genuine" data much harder

I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, is that considered harmful? By that logic the 5th Amendment would fall into the same category, and so would encryption. Making law enforcement work harder to find evidence of a crime cannot itself be criminalized unless you can come up with a reason why the underlying actions deserve to be criminalized.

> specifically in many of the grok cases it harms young victims that were used as templates for the material.

What are the criteria for this? If something is suitably transformed such that the original model for it is not discernible or identifiable, how can it harm them?

Do not take these as arguments against the position you are advocating, but as rebuttals to arguments that are unconvincing, or that would be terrible if applied generally.
myrmidon an hour ago | parent
If there is a glut of legal, AI-generated CSAM, then this provides a lot of deniability for criminal creators and spreaders who cause genuine harm, and it reduces the "vigilance" of prosecutors, too ("it's probably just AI-generated anyway..."). You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.

> What are the criteria for this?

My criterion would be victims suffering personally from the generated material. The "no harm" argument only really applies if victims and their social circle never find out about the material (but in many cases they did find out, sometimes because it was shared with them intentionally). You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.