dale_glass 5 days ago
But it's not intended as a watermark; it's an attempt at disruption. And with some models it simply doesn't work. For instance, I've seen somebody experiment with Glaze (the image-AI version of this). Glaze at high levels produces visible artifacts (see the middle image: https://pbs.twimg.com/media/FrbJ9ZTacAAWQQn.jpg:large ). It seems some models ignore it and produce mostly clean images on the output (looking like the last image), while others just interpret it as a texture, so the character is just wearing a funny patterned shirt. This is despite the intended result being to fool the model into generating something other than the intended character.
alpaca128 2 days ago | parent
> It seems some models ignore it and produce mostly clean images on the output (looking like the last image), while others just interpret it as a texture

This sounds like you're talking about img2img generation based on a glazed image rather than training on it, which isn't Glaze's intended purpose.