jack_tripper (a day ago):

> It's much easier to bully teenagers and small businesses into questionable settlements.

The world's biggest publishing, copyright, and IP holders (Disney, Thomson Reuters, Sony, etc.) aren't teenagers or small businesses. They easily have war chests as big as the AI companies', they own about 90% of the media IP on which LLMs are trained, and they have lawmakers and artists' unions on their side. If they have a case under current law, they'll take it to court.
marginalia_nu (a day ago):

Until recently, those IP rights holders just needed to show up for their opponents to give up and sign a settlement. The difference now is that that is no longer enough.
jack_tripper (a day ago):

Because the existing "transformative" and "fair use" doctrines are very murky when it comes to LLMs, which obfuscate the theft, versus the legal slam dunk that teenagers ripping CDs to MP3s and sharing them on the internet used to be. The big AI companies operate in a legal blind spot of obvious theft.
terminalshort (a day ago):

It's not murky in the slightest. It's so obviously legal fair use that every judge who has seen it so far has ruled so on summary judgment. That's the legal term for "your case is so bad I'm not even going to allow you to waste more of my time on it." https://www.whitecase.com/insight-alert/two-california-distr...
marginalia_nu (a day ago):

The risk of taking the AI companies to court too early, or without a solid enough case, is that a loss could create a devastating legal precedent. So the AI boys are permitted to operate in the gray zone, with ambiguous legality, while the rights holders bide their time.
terminalshort (a day ago):

Nope. Already done: https://www.whitecase.com/insight-alert/two-california-distr...
marginalia_nu (a day ago):

That judgment is far from the final nail in the coffin, according to the article you just linked. Though it may well be part of the reason why rights holders are being cautious, presumably gathering evidence for damages rather than pressing on with additional litigation.
terminalshort (a day ago):

You don't understand what summary judgment is. Summary judgment means there is no dispute over the facts of the case. In other words, the AI companies admitted that they did exactly the thing the plaintiffs accused them of doing. The problem, for the plaintiffs, is that the action they accused the AI companies of is, in fact, perfectly legal. No amount of evidence gathering can change that. Furthermore, damages are irrelevant here, because the case was not thrown out for lack of damages; it was thrown out because the defendant didn't break the law at all.
marginalia_nu (a day ago):

> Even if LLM training is fair use, AI companies face potential liability for unauthorized copying and distribution. The extent of that liability and any damages remain unresolved.

(from the article you linked)
terminalshort (a day ago):

No shit. If they violate copyright law, they will be punished for it. A statement so obvious that it isn't even worth making. What has been decided is that training LLMs does not violate copyright law.
marginalia_nu (a day ago):

Right, but you've moved the goalposts and limited it to only a subset of the things the AI companies are doing that might get them sued.
terminalshort (a day ago):

That was the original goalpost. Whether AI training is fair use is a new legal question. Stealing the copyrighted data that you use for training is obviously illegal, and nobody has ever claimed otherwise, so it's not even worth discussing and has nothing to do with AI.