goku12 3 days ago

Please remember that an LLM accessing any website isn't the problem here. It's the scraping bots that saturate server bandwidth (a DoS attack of sorts) to collect data to train LLMs on. An LLM solving a captcha or an Anubis-style proof-of-work challenge isn't a big concern, because the worst it will do with the collected data is cache it for later analysis and reporting. Unlike the crawlers, LLMs have no incentive to suck up huge amounts of data like a giant vacuum cleaner.
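
(For anyone unfamiliar with the mechanism: here's a minimal Python sketch of the idea behind an Anubis-style proof-of-work gate. The challenge format, difficulty, and function names are illustrative assumptions, not Anubis's actual implementation - the point is just that one page view costs a trivial amount of hashing, while crawling millions of pages doesn't.)

  import hashlib
  import itertools

  def solve_challenge(challenge: str, difficulty_bits: int = 16) -> int:
      """Find a nonce so SHA-256(challenge + nonce) has `difficulty_bits`
      leading zero bits. Cheap for one visitor, expensive at crawler scale."""
      target = 1 << (256 - difficulty_bits)
      for nonce in itertools.count():
          digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
          if int.from_bytes(digest, "big") < target:
              return nonce

  def verify(challenge: str, nonce: int, difficulty_bits: int = 16) -> bool:
      """Server side: a single hash, so verification is nearly free."""
      digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
      return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

  nonce = solve_challenge("example-challenge-token")
  assert verify("example-challenge-token", nonce)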

TeMPOraL 3 days ago

Scraping was a thing before LLMs; there's a whole separate arms race around it for ordinary competitive and "industrial espionage" reasons. I'm not really sure why model training would become a noticeable fraction of scraping activity - there are only a few players on the planet that can afford to train decent LLMs in the first place, and they're not going to re-scrape content they already have ad infinitum.

int_19h 3 days ago

> they're not going to re-scrape the content they already have

That's true for static content, but much of what gets scraped is forums and similar sites whose main value is that new content is constantly being generated - and that new content has to be re-scraped.

a96 2 days ago

If only sites agreed on putting a machine readable URL somewhere that lists all items by date. Like a site summary or a syndication stream. And maybe like a "map" of a static site. It would be so easy to share their updates with other interested systems.
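
(In case the sarcasm is lost on anyone: that's a description of RSS/Atom feeds and sitemaps. A rough Python sketch of the consumer side - the feed URL is a hypothetical placeholder, and only the common RSS 2.0 layout is assumed - showing how updates can be picked up without re-crawling every page:)

  import urllib.request
  import xml.etree.ElementTree as ET
  from email.utils import parsedate_to_datetime

  FEED_URL = "https://example.com/feed.xml"  # hypothetical feed location

  def new_items_since(last_seen):
      """Return (title, link, published) for feed items newer than `last_seen`
      (a timezone-aware datetime)."""
      with urllib.request.urlopen(FEED_URL) as resp:
          root = ET.parse(resp).getroot()
      items = []
      for item in root.iter("item"):  # RSS 2.0 <item> elements
          published = parsedate_to_datetime(item.findtext("pubDate"))
          if published > last_seen:
              items.append((item.findtext("title"), item.findtext("link"), published))
      return items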

int_19h a day ago

Why should they agree to make life even easier for people doing something they don't want?