ashleyn 6 hours ago
This is where my technical knowledge of genAI breaks down, but wouldn't an image generator be unable to produce such imagery unless honest-to-god CSAM were used in the training of it?
gs17 4 hours ago | parent
It's like the early demo for DALL-E where you could get "an armchair in the shape of an avocado", which presumably wasn't in the training set, but enough was in it to generalize the "armchair" and "avocado" concepts and combine them.
6ix8igth 6 hours ago | parent
It's possible for the model to take disparate concepts and put them together. E.g. you can train a LoRA to teach Stable Diffusion what a cowboy hat is, then ask for Dracula in a cowboy hat. That combination probably doesn't exist in its training data, but it will give it to you just fine. I'm not about to try, but I would assume the same would apply for child pornography.