planb 17 hours ago

> Well, precisely. What then is the AI company's justification for charging money to paint a picture of Harrison Ford to its users?

Formulated this way, I see your point. I see the LLM as a tool, just like photoshop. From a legal standpoint, I even think you're right. But from a moral standpoint, my feeling is that it should even be okay for an artist to sell painted pictures of Harrison Ford. But not to sell the same image as posters on ebay. And now my argument falls apart. Thanks for leading my thoughts in this direction...

noduerme 17 hours ago

You raise a really amazing point! One that should get more attention in these discussions on HN! I'm a painter in my spare time. I think it is okay to sit down and paint a picture of Harrison Ford (on velvet, maybe), and sell it on Etsy or something if you want to. Before you accuse me of hypocrisy, let me stipulate: Either way, it would not be ok for someone to buy that painting and use it in an ad campaign that insinuated that their soap had been endorsed by Harrison Ford. As an art director, it has obviously never been okay to ask someone to paint Harrison Ford and use that picture in a soap ad. I go through all kinds of hoops and do tons of checking on my artists' work to make sure that it doesn't violate anyone else's IP, let alone anyone's human likeness.

But that's all known. My argument for why me selling that painting is okay, and why an AI company with a neural network doing the same thing and selling it would not be okay, is a lot more subtle and goes to a question that I think has not been addressed properly: What's the difference between my neurons seeing a picture of Harrison Ford, and painting it, and artificial neurons owned by a company doing the same thing? What if I traced a photo of Ford and painted it, versus doing his face from memory?

(As a side note, my friend in art school had an obsession with Jewel, the singer. He painted her dozens of times from memory. He was not an AI, just a really sweet guy).

To answer why I think it's ok to paint Jewel or Ford, and sell your painting, I kind of have to fall back on three ideas:

(1) Interpretation: You are not selling a picture of them, you're selling your personal take on your experience of them. My experience of watching Indiana Jones movies as a kid and then making a painting is not the same thing as holding a compressed JPEG file in my head, to the degree that my own cognitive experience has significantly changed my perceptions in ways that will come out in the final artwork, enough to allow for whatever I paint to be based on some kind of personal evolution. The item for sale is not a picture of Harrison Ford, it's my feelings about Harrison Ford.

(2) Human-centrism: That my neurons are not 1:1 copies of everything I've witnessed. Human brains aren't simply compression algorithms the way LLMs or diffusers are. AI doesn't bring cognitive experience to its replication of art, and if it seems to do so, we have to ask whether that isn't just a simulacrum of multiple styles it stole from other places laid over the art it's being asked to produce. There's an anti-human argument to be made that we do the exact same thing when we paint Indiana Jones after being exposed to Picasso. But here's a thought: we are not a model. Or rather, each of us is a model. Buying my picture of Indiana Jones is a lot like buying my model and a lot less like buying a platonic picture of Harrison Ford.

(3) Tools, as you brought up. The more primitive the tools, the harder it is to truly copy something. It takes a year to make 4 seconds of animation, while an AI can copy it in no time at all... one can argue, by some function of time and effort, that an artwork is at least a product of one's own labor, if not completely original.

I'm throwing these things out here as a bit of a challenge to the HN community, because I think these are attributes that have been under-discussed in terms of the difference between AI-generated artwork and human art (and possibly a starting point for a human-centric way of understanding the difference).

I'm really glad you made me think about this and raised the point!

[edit] Upon re-reading, I think points 1 and 2 are mostly congruent. Thanks for your patience.

brookst 12 hours ago

I like your formulation, but I find point 1 unconvincing. Does it still hold if you paint from a reference image beside the easel? Or projected onto the canvas? Or if it's not a "real" painter but a low-wage laborer? Two of them side by side? A hundred of them?

Where I'm going is: I don't think it makes sense for the moral or legal acceptability of an image to depend on the mechanical means by which it was created. I think we have to judge based on the image itself. If the human-generated version and the AI-generated version both show the same level of interpretation when viewed, I don't think point 1 supports treating them differently.

And, as you say, point 2 is mostly congruent, but I have to point out that LLMs are not merely compressed versions of the training material, but instead generalized learnings based on the training data.

ML “neurons” may function differently than our own, and transformer architecture is likely different from the way we think, but the learning of generalized patterns plus details sufficient to reconstitute specific instances seems pretty similar.

Think about painting Indiana Jones; I'll bet you could paint the handle of the whip in great detail. But it's likely not because you remember a specific image of his whip handle; it's because you know what whip handles look like in general. ML models work similarly (at some level of abstraction).

I'm left unconvinced that there is anything substantially different between human- and AI-generated art, and increasingly convinced that we can only judge the IP position of either based on the work itself.

planb 16 hours ago

Thank you too, this discussion really helped me get a more nuanced view of this whole topic. I still think OpenAI should be allowed to generate these kinds of images, though admittedly from a selfish "I want to use this to generate labels for my (non-commercial) home brew beers" perspective. I certainly understand the counterpoints better now.

noduerme 16 hours ago

I think it's an amazing tool as a starting point or a way to get ideas. Our small ad agency's policy has always been to research the hell out of something... like, if you're asked to do a logo for someone running for state Senate, go read the history of the state senate since 1846, and look up all the things everyone used, and start brainstorming art ideas that have multiple layers of meaning that work with your candidate's message. But AI makes it super easy to get a nice looking starting point and then use your ideas to iterate on top of that.

I'm a bit of a home moonshiner, too, so love that you're coming up with labels and using these tools to help out! If I could offer one piece of advice, whether for writing prompts or making your own final art, it would be: History is so rich with visual ideas you can riff from. The history of beer and wine bottles itself is unbelievable. If aliens came here a thousand years after we're gone, and all that was left were liquor labels, they could understand most of our culture. The LLMs always go to the most obvious thing, unless you tell them specifically otherwise. Use the tool but also get funky and mix up the ideas you love the most, adding your own flavor. Just like being a brewer or a chef. That's the essence of being an artist, and making something that at the end of the day is unique and new. Love it. Send me a beer please.
