downbad_ 14 hours ago
I don't see how that would work. Isn't it akin to copying AI-generated text and asking different AI models whether the text was generated by AI? They wouldn't be able to tell. But maybe images have some sort of marker. I don't know.
Bender 14 hours ago
Give it a shot and see. It could be that some facet of AI image generation is distinct from the other image-making processes the model knows about. Try a few of them. Semi-related: it was able to spot the fake Ghislaine [1].
vunderba 13 hours ago
Some of them do implement a steganographic watermark, but it's a continual game of cat and mouse. It would shock me if even SOTA watermarks survived running the image through a local model's img2img at a low denoise.
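To make the fragility concrete, here's a minimal sketch in Python. It assumes a naive LSB-style watermark (real production schemes are spread-spectrum or latent-space and far more robust, so this is an illustration of the principle, not any vendor's actual method): even a tiny per-pixel jitter, much gentler than what img2img does, already pushes the payload toward chance.

    # Minimal sketch: a naive LSB watermark, assumed for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    def embed_lsb(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Overwrite the least significant bit of each pixel with a watermark bit."""
        return (img & 0xFE) | bits

    def extract_lsb(img: np.ndarray) -> np.ndarray:
        """Read the watermark back out of the LSB plane."""
        return img & 1

    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
    bits = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)    # watermark payload

    marked = embed_lsb(img, bits)
    assert np.array_equal(extract_lsb(marked), bits)  # extraction is perfect pre-edit

    # Simulate a very mild regeneration: jitter each pixel by at most +/-2.
    # A real img2img pass at low denoise perturbs pixels far more than this.
    noise = rng.integers(-2, 3, size=marked.shape)
    degraded = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)

    recovered = extract_lsb(degraded)
    # Uniform +/-2 jitter flips the LSB whenever the noise is odd, so
    # agreement lands around 0.6 -- barely above the 0.5 of pure chance.
    print("bit agreement:", (recovered == bits).mean())

Since img2img resynthesizes every pixel rather than just jittering them, anything that survives it presumably has to be encoded at a much coarser, more semantic level than individual pixel values.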