brookst 12 hours ago

I like your formulation, but I find point 1 unconvincing. Does it still hold if you paint from a reference image beside the easel? Or one projected onto the canvas? Or if it’s not a “real” painter but a low-wage laborer? Two of them side by side? A hundred of them?

Where I’m going with this is that I don’t think it makes sense for the moral or legal acceptability of an image to depend on the mechanical means by which it was created. I think we have to judge based on the image itself. If the human-generated version and the AI-generated version both show the same level of interpretation when viewed, I don’t think point 1 supports treating them differently.

And, as you say, point 2 is mostly congruent, but I have to point out that LLMs are not merely compressed versions of the training material; they are generalized representations learned from the training data.

ML “neurons” may function differently from our own, and the transformer architecture likely differs from the way we think, but the learning of generalized patterns, plus enough detail to reconstitute specific instances, seems pretty similar.

Think about painting Indiana Jones: I’ll bet you could paint the handle of the whip in great detail. But it’s unlikely that’s because you remember a specific image of his whip handle; it’s because you know what whip handles look like in general. ML models work similarly (at some level of abstraction).

I’m left unconvinced that there is anything substantially different about human- and AI-generated art, and convinced that we can only judge the IP position of either based on the work itself.