ben_w 5 days ago
To the extent that the people making the models feel unburdened by data being explicitly watermarked "don't use me", you are correct. But deliberately stripping such markings seems like an awful risk: it's a kind of DRM, and circumventing DRM is illegal in many countries.
dale_glass 5 days ago | parent
But it's not intended as a watermark; it's an attempt at disruption, and with some models it simply doesn't work. For instance, I've seen somebody experiment with Glaze (the image-AI version of this). At high intensity levels, Glaze produces visible artifacts (see the middle image: https://pbs.twimg.com/media/FrbJ9ZTacAAWQQn.jpg:large). Some models ignore it and produce mostly clean output (looking like the last image), while others just interpret it as a texture, so the character simply ends up wearing a funny patterned shirt. Meanwhile, the intended result is to fool the model into generating something other than the intended character.
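As a rough illustration of why pixel-level perturbations like this tend to be fragile (this is not Glaze's actual mechanism, just a sanity check on the general idea): most training pipelines downscale and re-encode images before use, which attenuates high-frequency noise. A minimal sketch, assuming a clean/glazed image pair of the same dimensions (file names are hypothetical):

    import io
    import numpy as np
    from PIL import Image

    def perturbation_energy(a: Image.Image, b: Image.Image) -> float:
        """Mean absolute pixel difference between two same-sized images."""
        x = np.asarray(a, dtype=np.float32)
        y = np.asarray(b, dtype=np.float32)
        return float(np.abs(x - y).mean())

    def training_preprocess(img: Image.Image, size: int = 512) -> Image.Image:
        """Downscale and JPEG re-encode, as many ingestion pipelines do."""
        img = img.resize((size, size), Image.Resampling.LANCZOS)
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=85)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    # Hypothetical inputs: the same artwork with and without Glaze applied.
    clean = Image.open("original.png").convert("RGB")
    glazed = Image.open("glazed.png").convert("RGB")

    before = perturbation_energy(clean, glazed)
    after = perturbation_energy(training_preprocess(clean),
                                training_preprocess(glazed))
    print(f"perturbation energy: {before:.2f} raw -> {after:.2f} after preprocessing")

If the measured difference drops sharply after the resize/re-encode step, the perturbation the model actually sees is much weaker than what was applied, which is consistent with some models producing mostly clean output.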