| ▲ | DrammBA 6 hours ago |
Anthropic always summarizes the reasoning output to prevent some distillation attacks.
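For context, the distillation at issue here is usually sequence-level: a smaller student model is fine-tuned on whatever text the teacher API returns, so the provider's output is the ceiling on what can be imitated, and summarizing the chain of thought shrinks that signal. A minimal PyTorch sketch of the general technique (random tensors stand in for a real tokenizer and models; this illustrates distillation in general, not Anthropic's actual pipeline or any specific attack):

    import torch
    import torch.nn.functional as F

    def sequence_distill_loss(student_logits: torch.Tensor,
                              teacher_token_ids: torch.Tensor) -> torch.Tensor:
        # Plain next-token cross-entropy against the tokens the teacher emitted.
        # If the API returns only a summary of its reasoning, that summary is
        # all the student ever sees here, hence summarization as a defense.
        vocab = student_logits.size(-1)
        return F.cross_entropy(student_logits.view(-1, vocab),
                               teacher_token_ids.view(-1))

    # Toy usage: batch of 4 sequences, length 16, vocabulary of 32k tokens.
    student_logits = torch.randn(4, 16, 32000, requires_grad=True)
    teacher_token_ids = torch.randint(0, 32000, (4, 16))
    loss = sequence_distill_loss(student_logits, teacher_token_ids)
    loss.backward()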
|
| ▲ | jdiff 5 hours ago | parent | next [-] |
Genuine question: why have you chosen to phrase this scraping and distillation as an attack? I'm imagining you're doing it because that's how Anthropic prefers to frame it, but isn't scraping and distillation, with some minor shuffling of semantics, exactly what Anthropic and co did to obtain their own position? And would it be valid to interpret that as an attack as well?
| ▲ | irthomasthomas 5 hours ago | parent | next [-] |
If you ask Claude in Chinese, it thinks it's DeepSeek.

| ▲ | DrammBA 4 hours ago | parent | prev | next [-] |
> I'm imagining you're doing it because that's how Anthropic prefers to frame it

Correct.

> would it be valid to interpret that as an attack as well?

Yup.

| ▲ | fragmede 2 hours ago | parent | prev [-] |
Firehosing Anthropic to exfiltrate their model seems materially different to me than Anthropic downloading all of the Internet to create the model in the first place. But maybe that's just me?

| ▲ | jdiff 22 minutes ago | parent | next [-] |
I don't see the material difference between firehosing Anthropic and Anthropic firehosing random sites on the internet. As someone who runs a few of those random sites, I've had to take actions that increase my costs (and burn my time) to mitigate a new host of scrapers constantly firing at every available endpoint, even ones specifically marked as off limits.

| ▲ | robrenaud an hour ago | parent | prev [-] |
Yeah, it's different. Anthropic profits when it delivers tokens. Hosting providers pay when Anthropic scrapes them.
|
| ▲ | vintermann 5 hours ago | parent | prev | next [-] |
| Attacks? That's a choice of words. |
| ▲ | DrammBA 5 hours ago | parent [-] |
Definitely Anthropic playing the victim after distilling the whole internet.
|
| ▲ | butlike 4 hours ago | parent | prev | next [-] |
Proprietary pattern matcher proves there's no moat; promptly pre-covers others' perception.
|
| ▲ | nyc_data_geek1 6 hours ago | parent | prev | next [-] |
Very cool that these companies can scrape basically all extant human knowledge, utterly disregard IP/copyright/etc., and then cry foul when the tables turn.
| ▲ | butlike 4 hours ago | parent | next [-] |
All extant human knowledge SO FAR. Remember, by the nature of the beast, the companies will always be operating in hindsight, with outdated human knowledge.

| ▲ | stavros 5 hours ago | parent | prev [-] |
Yep, that is exactly what happens. It's a disgrace that their models aren't open after training on everything humanity has preserved. They should at least release the weights of their old/deprecated models, but no, that would be losing money.

| ▲ | copperx 2 hours ago | parent [-] |
We should treat LLMs somewhat like patents or drugs: after 5 years or so, the models should become open source, or at the very least the weights, to compensate for the distilling of human knowledge.
|
| ▲ | MasterScrat 6 hours ago | parent | prev [-] |
And so does OpenAI.