masfuerte a day ago

I don't really care.

Either enforce the current copyright regime and sue the AI companies to dust.

Or abolish copyright and let us all go hog wild.

But this halfway house where you can ignore the law as long as you've got enough money is disgusting.

dragonwriter a day ago | parent | next [-]

Or treat AI training as within the coverage of the current fair use regime (which is certainly defensible within the current copyright regime), while prosecuting the use of AI models to create infringing copies and derivative works that do not themselves have permission or a reasonable claim to fall within the scope of fair use (and prosecuting hosted AI firms for contributory infringement where their actions with regard to such created infringements fit the existing law on that).

Wowfunhappy a day ago | parent | next [-]

^ I feel like I almost never see this take, and I don't understand why, because frankly, it strikes me as patently obvious! Of course the tool isn't responsible; the person who uses it is.

srveale a day ago | parent | next [-]

I think the tricky bit is that AI companies make money off the collected works of artists, regardless of user behaviour. Suppose I pay for an image generator because I like making funny pictures in Ghibli style, then the AI company makes money because of Ghibli's work. Is that ethical? I can see how an artist would get upset about it.

On the other hand, suppose I also like playing guitar covers of songs. Does that mean artists should get upset at the guitar company? Does it matter if I do it at home or at a paid gig? If I record it, do I have to give credit to the original creator? What if I write a song with a similar style to an existing song? These are all questions that have (mostly) well defined laws and ethical norms, which usually lean towards what you said - the tool isn't responsible.

Maybe not a perfect analogy. It takes more skill to play guitar than to type "Funny meme Ghibli style pls". Me playing a cover doesn't reduce demand for actual bands. And guitar companies aren't trying to... take over the world?

At the end of the day, the cat is out of the bag, generative AI is here to stay, and I think I agree that we're better off regulating use rather than prohibition. But considering the broader societal impacts, I think AI is more complicated of a "tool" than other kinds of tools for making art.

codedokode a day ago | parent [-]

> I think the tricky bit is that AI companies make money off the collected works of artists,

There is also a chance that AI companies didn't obtain the training data legally; in that case it would be at least immoral to build a business on stolen content.

PlunderBunny a day ago | parent | prev [-]

This is similar to (but not the same as) the famous VCR case [0] that allowed home taping of TV shows.

[0] https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive....

prawn a day ago | parent | prev [-]

I see AI training on public material the way I see up-and-coming artists being inspired by the artists before them. Obviously the scale is very different. I don't mind your scenario, because an AI firm that couldn't stay on top of what its model was creating could voluntarily reduce the material used to train it.

codedokode a day ago | parent [-]

You imply that the AI model is creating new works rather than merely rearranging pieces from other works you never saw and therefore might consider novel. The AI model is not currently a model of a creative human: a human doesn't need to listen to a million songs to create their own.

ryandamm a day ago | parent | prev | next [-]

This may not be a particularly popular opinion, but current copyright laws in the US are pretty clearly in favor of training an AI as a transformative act, and covered by fair use. (I did confirm this belief in conversation with an IP attorney earlier this week, by the way, though I myself am not a lawyer.)

The best-positioned lawsuits, like NYTimes vs. OpenAI/MS, are actually based on violating terms of use rather than on infringement at training time.

Emitting works that violate copyright is certainly possible, but you could argue that the additional entropy passed into the model (the text prompt, or the random seed in a diffusion model) is necessary for the infringement. Regardless, current law would suggest that the infringing action happens at inference time, not at training time.

I'm not making a claim that the copyright should work that way, merely that it does today.

codedokode a day ago | parent | next [-]

> Regardless, the current law would suggest that the infringing action happens at inference time, not training.

Zuckerberg downloading a large library of pirated articles does not violate any laws? I think you can get a life sentence for merely posting links to the library.

philipkglass a day ago | parent [-]

> I think you can get a life sentence for merely posting links to the library.

This isn't true in the United States. I would be surprised if it were true in any country. Many people have posted sci-hub links here, and to my knowledge nobody has ever suffered legal problems from it:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

codedokode a day ago | parent [-]

Doesn't it count as distribution? I thought the DMCA requires links to be deleted.

philipkglass 8 hours ago | parent [-]

A copyright holder may file a takedown notice [1] against a platform that hosts a link to copyright-infringing material, like a book from Library Genesis or an article from Sci-Hub. Failure to act on a legitimate takedown notice opens the platform operator up to a civil lawsuit. The platform does not have to take proactive measures to prevent infringing links from being posted by users. Some platforms, like YouTube, take more aggressive measures to proactively guard against infringement, but those measures are not required by the provisions of the DMCA.

[1] https://guides.dml.georgetown.edu/c.php?g=904530&p=6510951 (See "Notifications of Claimed Infringement")

photonthug a day ago | parent | prev | next [-]

> The best-positioned lawsuits to win, like NYTimes vs. OpenAI/MS, is actually based on violating terms of use, rather than infringing at training time.

I agree with this, but it's worth noting this does not conflict with and kind of reinforces the GP's comment about hypocrisy and "[ignoring] the law as long as you've got enough money".

The terms-of-use angle is better than copyright, but most likely we'll never see precedent created that allows this argument to succeed on a large scale. If it were allowed, then every ToS would simply begin to say "Humans Only, Robots Not Welcome," or if you're a newspaper, "by reading this you agree that you're a human or a search engine and will never use this content for generative AI." If GitHub could enforce site terms and conditions like that, it could prevent everyone else from scraping, regardless of individual repository software licenses, etc.

While the courts are setting up precedent for this kind of thing, they will be pressured to maintain a situation where terms and conditions remain useful for corporations to punish people. Meanwhile, corporations mostly won't be able to punish corporations, regardless of the difference in size. But larger corporations can ignore whatever rules they want, to the possible detriment of smaller ones. All of which is more or less the status quo.

o11c a day ago | parent | prev [-]

Training alone, perhaps. But the way AIs are actually used (regardless of prompt engineering) is a direct example of what was forbidden by the case that introduced the "transformative" language.

> if [someone] thus cites the most important parts of the work, with a view, not to criticize, but to supersede the use of the original work, and substitute the review for it, such a use will be deemed in law a piracy.

Of course, we live in a post-precedent world, so who knows?

mlsu a day ago | parent | prev [-]

The hypocrisy is obviously disgusting.

It also shows how, at the end of the day, none of the justifications for this intellectual property crap are about creativity, preserving the rights of creators, or any lofty notion that intellectual property actually makes the world a better place, but rather, it is a naked power+money thing. Warner Bros and Sony can stop you from publishing a jpeg because they have lawyers who write the rulebook. Sam Altman can publish a jpeg because the Prince of Saud believes that he is going build for corporate America a Golem that can read excel spreadsheets.