myrmidon 4 hours ago
There are multiple valid reasons to fight realistic computer-generated CSAM content. Uncontrolled proliferation of AI-CSAM makes detection of "genuine" material much harder and prosecution of perpetrators more difficult, and specifically in many of the Grok cases it harms young victims who were used as templates for the material. Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
Eisenstein 2 hours ago | parent
> Uncontrolled proliferation of AI-CSAM makes detection of "genuine" material much harder

I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, is that considered harmful? By that logic the 5th Amendment would fall under the same category, and so would encryption. Making law enforcement work harder to find evidence of a crime cannot itself be criminalized unless you can come up with a reason why the underlying actions deserve to be criminalized.

> specifically in many of the Grok cases it harms young victims who were used as templates for the material

What are the criteria for this? If something is sufficiently transformed that the original model for it is not discernible or identifiable, how can it harm them?

Do not take these as arguments against the idea you are arguing for, but as rebuttals of arguments that are not convincing, or that, if they were, would be terrible if applied generally.