| ▲ | arcfour 3 hours ago |
| You're perfectly free to scrape the web yourself and train your own model. You're not free to let Anthropic do that work for you, because they don't want you to, because it cost them a lot of time, money, and secret sauce, presumably filtering it for quality and other stuff. Stole? Courts have ruled it's transformative, and it very obviously is. AI doomerism is exhausting, and I don't even use AI that much; it's just annoying to see people who want to find any reason they can to moan. |
|
| ▲ | petcat 3 hours ago | parent | next [-] |
| > Stole? Courts have ruled it's transformative, and it very obviously is. The courts have ruled that AI outputs are not copyrightable. The courts have also ruled that scraping by itself is not illegal, only maybe against a Terms of Service. Therefore, Anthropic, OpenAI, Google, etc. have no legal claim to any proprietary protections of their model outputs. So we have two things that are true: 1) Anthropic (certainly) violated numerous TOS by scraping all of the internet, not just public content. 2) Scraping Anthropic's model outputs is no different than what Anthropic already did. Only a TOS violation. |
| |
| ▲ | dpark an hour ago | parent | next [-] | | > 2) Scraping Anthropic's model outputs is no different than what Anthropic already did. Only a TOS violation. Regardless of whether LLM training amounts to theft, thieves are still allowed to put locks on their own doors. | |
| ▲ | gruez an hour ago | parent | prev [-] | | >The courts have ruled that AI outputs are not copyrightable. "not copyrightable" doesn't imply they can't frustrate attempts to scrape data. | | |
| ▲ | petcat an hour ago | parent [-] | | Nobody is saying they can't try to stop you themselves. That's where the Terms of Service violation part comes in. They can cancel your account, block your IP, etc. They just can't legally stop you by, for instance, compelling a judge to order you to stop. | | |
| ▲ | dpark 43 minutes ago | parent [-] | | > They just can't legally stop you by, for instance, compelling a judge to order you to stop. They probably can, actually. TOS are legally binding. More likely they would block you rather than pursuing legal avenues but they certainly could. | | |
| ▲ | petcat 23 minutes ago | parent [-] | | The Supreme Court already ruled on this. Scraping public data, or data that you are authorized to access, is not a violation of the Computer Fraud and Abuse Act. Now, if you try to get around attempts to block your access, then yes you could be in legal trouble. But that's not what is happening here. These are people/companies that have Claude accounts in good standing and are authorized by Anthropic to access the data. Nobody is saying that Anthropic can't just block them though, and they are certainly trying. | | |
▲ | dpark 8 minutes ago | parent [-] | | I didn’t say anything about the Computer Fraud and Abuse Act. TOS are legally binding contracts in their own right if implemented correctly.
|
| ▲ | alpha_squared 2 hours ago | parent | prev | next [-] |
> You're perfectly free to scrape the web yourself and train your own model. Actually, not anymore, as a result of OpenAI and Anthropic's scraping. For example, Reddit came down hard on access to its APIs in response to ChatGPT's release and the news that LLMs were built atop scraping of the open web. Most of the web today is not as open as before, as a result of scraping for LLM training data. So, no, no one is perfectly free to scrape the web anymore, because open access is dying.
|
| ▲ | two_tasty 2 hours ago | parent | prev | next [-] |
| "...free to scrape the web yourself and train your own model." Yes, rich and poor are equally forbidden from sleeping under bridges. |
| |
| ▲ | kspacewalk2 2 hours ago | parent [-] | | Meaning what? The poor gets to sleep in the guest room of the rich guy's house because muh inequality? Anthropic paid a lot of money for a moat and want to guard it. It is not wrong, in any sense of the word, for them to do so. | | |
▲ | salawat an hour ago | parent [-] | | Rich people aren't going to find themselves needing to sleep under a bridge, so the law really only exists as a constraint on the poor. Duh. The flex that "well, a rich guy couldn't do it either" is A) at best a myopic misunderstanding perpetuated by out-of-touch people and B) hopelessly naive, because any punishment for the rich guy actually sleeping under a bridge is so laughably small it may as well not exist. Hence the whole bit of "rules for thee, but not for me". | | |
| ▲ | kspacewalk2 9 minutes ago | parent | next [-] | | Okay, you explained what Anatole France meant, which is probably helpful for those few who didn't get it from the quote itself. Perhaps now you can explain what on earth this has to do with Anthropic not wanting to let other for-profit businesses mooch off its investment of time, brainpower and money? | |
| ▲ | dpark an hour ago | parent | prev [-] | | You explained what “rich and poor are equally forbidden from sleeping under bridges” means, but not what this has to do with the statement that one is free to do their own scraping and training, which I’m pretty sure is what kspacewalk was asking. |
|
| ▲ | jtbayly 3 hours ago | parent | prev | next [-] |
Wut? They did exactly the same thing! Try this: if you want to train a model, you’re free to write your own books and websites to feed into it. You’re not free to let others do that work for you, because they don’t want you to, because it cost them a lot of time and money and secret sauce, presumably filtering it for quality and other stuff.
| |
| ▲ | arcfour 2 hours ago | parent [-] | | I don't really care, honestly. If you want to keep your knowledge secret, don't publish it publicly. The model doesn't output your work directly and pass it off as original. It outputs something completely different. So I don't see why I should care. | | |
| ▲ | buzzerbetrayed 2 hours ago | parent [-] | | Lmfao. Your own words turned against you and suddenly you “don’t really care”. | | |
▲ | jollymonATX an hour ago | parent [-] | | Yeah, these folks' skin is often very thin. One poke too hard and it's "whatever" and they scuttle off. Really hope there's a day they introspect.
|
| ▲ | airstrike 2 hours ago | parent | prev | next [-] |
| Guess who else spent a lot of time and money and secret sauce? Do you hear the words coming out of your mouth? |
|
| ▲ | nunez 2 hours ago | parent | prev | next [-] |
Lol; like heck we are. Try scraping the NYTimes at LLM scale. You can time how quickly you’ll get 429’ed or, at worst, hit with a C&D.
|
| ▲ | loremium 25 minutes ago | parent | prev | next [-] |
Reminds me of "Don't Look Up" a bit. There is clearly an imbalance in how licensing applies to model providers versus everyone else, to say nothing of knowledge extraction (yes, younger people don't learn properly now, and older generations forget), all shortly before the rug pull happens in the form of access being cut off for everyone who isn't rich.
|
| ▲ | andersonpico an hour ago | parent | prev | next [-] |
Your selective respect for work is a glaring double standard. The effort to produce the original content they scraped is orders of magnitude bigger than what it took to train the model, so if that wasn't enough to protect the authors from Anthropic, it shouldn't be enough to protect Anthropic from people distilling its models. Your legal argument is all over the place as well. Which is more relevant here: what the courts ruled, or what you consider obvious? How is distillation less transformative than scraping? How does a court ruling that scraping to train models is legal relate to distillation? Nobody is scoring you on neutrality points for not using AI much, and calling this doomerism is just a thought-terminating cliché that refuses to engage with the comment you're replying to. In fact, your comment is not engaging with anything at all; you're vaguely gesturing toward potential arguments without making them. If you find discussing this exhausting, then don't, but also don't flood the comments with low-effort whining.
|
| ▲ | unethical_ban 2 hours ago | parent | prev [-] |
Let's talk ethics, not law. Why is it okay for these companies to pirate books, scrape the entire web, and offer synthesized summaries of all of it, lowering traffic and revenue for countless websites and professionals, but not okay for others to try to do the same to an AI model? Is the work of others less valid than the work of a model?
| |
▲ | gruez an hour ago | parent | next [-] | | >Why is it okay for these companies to pirate books Courts have ruled it's not, and I don't think anyone is arguing it's okay. >but it is not okay for others to try to do the same to an AI model? The steelman version is that it's okay to do it once you've acquired the data somehow, but that doesn't mean Anthropic can't set up roadblocks to frustrate you. | |
| ▲ | p1esk 2 hours ago | parent | prev | next [-] | | I don’t see why it’s not ok to do that to an AI model. Or are you asking why they don’t want you to do it? | |
| ▲ | sfn42 2 hours ago | parent | prev [-] | | I don't think anyone's saying it's not okay - I think the point is that Anthropic has every right to create safeguards against it if they want to - just like the people publishing other information are free to do the same. And everyone is free to consume all the free information. |
|