dannyobrien 4 hours ago

Metabrainz is a great resource -- I wrote about them a few years ago here: https://www.eff.org/deeplinks/2021/06/organizing-public-inte...

There's something important here in that a public good like Metabrainz would be fine with the AI bots picking up their content -- they're just doing it in a frustratingly inefficient way.

It's a co-ordination problem: Metabrainz assumes good intent from bots, and has to lock down when they violate that trust. The bots have a different model -- they assume that the website is adversarially "hiding" its content. They won't believe a random site when it says "Look, stop hitting our API, you can pick all of this data in one go, over in this gzipped tar file."

Or better still, this torrent file, where the bots would briefly end up improving the shareability of the data.

tux 4 hours ago | parent | next [-]

Yeah, AI scrapers are one of the reasons why I closed my public website https://tvnfo.com and only left the donors' site online. It's not only because of AI scrapers; I grew tired of people trying to scrape the site, eating up a lot of resources this small project doesn't have. Very sad really, it had been publicly online since 2016. Now it's only available for donors. I'm running a tiny project on just $60 a month; if this wasn't my hobby I would have closed it completely a long time ago :-) Who knows, if there is more support in the future I might reopen the public site again with something like Anubis bot protection. I thought it was only small sites like mine that got hit hard, but it looks like many have similar issues. Soon nothing will be open or useful online. I wonder if this was the plan all along for whoever is pushing AI on such a massive scale.

fartfeatures 4 hours ago | parent | prev | next [-]

> They won't believe a random site when it says "Look, stop hitting our API, you can pick all of this data in one go, over in this gzipped tar file."

What mechanism does a site have for doing that? I don't see anything in the robots.txt standard about being able to set priorities, but I could be missing something.

arjie 4 hours ago | parent | next [-]

The only real mechanism is "Disallow: /rendered/pages/*" plus "Allow: /archive/today.gz" or whatever, and nothing communicates that the latter contains the same content as the former. There is no machine-readable standard, AFAIK, that lets webmasters communicate with bot operators at this level of detail. It would be pretty cool if standard CMSes had such a protocol to adhere to: install a plugin and people could 'crawl' your WordPress or MediaWiki from a single dump.
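
As a rough sketch (the paths and dump URL here are made up, not any real convention), the best you can express in robots.txt today is something like:

    # Hypothetical robots.txt: keep crawlers off the rendered pages,
    # leave a bulk dump reachable -- but nothing here says the dump
    # *is* the disallowed content in another form.
    User-agent: *
    Disallow: /rendered/pages/
    Allow: /archive/today.gz
    Sitemap: https://example.com/sitemap.xml

To the bot, the Allow line is just another path; the "fetch this instead of crawling everything under /rendered/pages/" part has nowhere to live.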

sbarre 2 hours ago | parent [-]

A sitemap.xml file could get you most of the way there.
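
A minimal sketch (the URLs and dates are made up): a sitemap can list the dump alongside the pages and hint at its freshness and priority, though it still can't say "fetch this one file instead of crawling the rest."

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/archive/today.gz</loc>
        <lastmod>2025-01-15</lastmod>
        <priority>1.0</priority>
      </url>
    </urlset>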

jacksnipe 4 hours ago | parent | prev | next [-]

It’s not great, but you could add it to the body of a 429 response.

VTimofeenko 4 hours ago | parent [-]

Genuinely curious: do programs actually read the bodies of 429 responses? In the code bases I have seen, nothing beyond the status code itself is read.

jakelazaroff 4 hours ago | parent | next [-]

Sometimes! The server can also send a retry-after header to indicate when the client is allowed to request the resource again: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
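
A rough sketch of what a polite client could do with it, in Python with the requests library (the URL and retry policy are just placeholders):

    import time
    import requests

    def polite_get(url, max_retries=3):
        # Fetch url, backing off when the server answers 429.
        for _ in range(max_retries):
            resp = requests.get(url)
            if resp.status_code != 429:
                return resp
            # Retry-After is either a number of seconds or an HTTP date;
            # this sketch only handles the seconds form.
            retry_after = resp.headers.get("Retry-After", "1")
            wait = int(retry_after) if retry_after.isdigit() else 1
            time.sleep(wait)
        return resp

    # Example: polite_get("https://example.com/api/resource")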

deathanatos 4 hours ago | parent [-]

… which isn't part of the body of a 429…

VTimofeenko 3 hours ago | parent [-]

Well, to be fair, I did say "is not read beyond the code itself"; a header is not the code, so Retry-After is a perfectly valid answer. I vaguely remember reading about it, but I don't recall seeing it used in practice. The MDN link shows that Chrome derivatives support that header though, which makes it pretty darn widespread.

gleenn 4 hours ago | parent | prev [-]

Almost certainly not by default; certainly not in any of the HTTP libs I have used.

squigz 4 hours ago | parent | prev [-]

The mechanism is putting some text that points to the downloads.

TeMPOraL 4 hours ago | parent [-]

So perhaps it's time to standardize that.

squigz 4 hours ago | parent | next [-]

I'm not entirely sure why people think more standards are the way forward. The scrapers apparently don't listen to the already-established standards. What makes one think they would suddenly start if we add another one or two?

TeMPOraL 3 hours ago | parent [-]

There is no standard, well-known way for a website to advertise, "hey, here's a cached data dump for bulk download, please use that instead of bulk scraping". If there were, I'd expect the major AI companies and other users[0] to use that method for gathering training data[1]. They have compelling reasons to: it's cheaper for them, and it cultivates goodwill instead of burning it.

This also means that right now it could be easier than ever to push such a standard through: there are big players who would actually be receptive to it, so even a few not-entirely-selfish actors agreeing on it might just do the trick.

--

[0] - Plenty of them exist. Scraping wasn't popularized by AI companies; it's standard practice for online businesses in competitive markets. It's the digital equivalent of sending your employees to competing stores undercover.

[1] - Not to be confused with having an LLM scrape a specific page because a user requested it. That IMO is a totally legitimate and unfairly penalized/vilified use case, because the LLM is acting for the user - i.e. it becomes a literal user agent, in the same sense that a web browser is (this is the meaning behind the name of the "User-Agent" header).

ethin 3 hours ago | parent [-]

You do realize that these AI scrapers are most likely written by people who have no idea what they're doing, right? Or who just don't care? If they did, pretty much none of the problems these things have caused would exist. Even if we did standardize such a thing, I doubt they would follow it. After all, they act as if they and everyone else have infinite resources, so they can just hammer websites forever.

fartfeatures 3 hours ago | parent [-]

I realise you are making assertions for which you have no evidence. Until a standard exists we can't just assume nobody will use it, particularly when it makes the very task they are scraping for simpler and more efficient.

edoceo 2 hours ago | parent | prev [-]

I'm in favor of /.well-known/[ai|llm].txt or even a JSON or (gasp!) XML.

Or even /.well-known/ai/$PLATFORM.ext which would have the instructions.

Could even be "bootstrapped" from /robots.txt
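
Purely as a hypothetical sketch of what such a file could contain (none of these field names are an existing standard):

    # /.well-known/ai.txt -- hypothetical, no such standard exists today
    bulk-dump: https://example.com/archive/full-dump.tar.gz
    bulk-dump-updated: weekly
    torrent: https://example.com/archive/full-dump.torrent
    crawling: discouraged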

zzo38computer 3 hours ago | parent | prev | next [-]

> They won't believe a random site when it says "Look, stop hitting our API, you can pick all of this data in one go, over in this gzipped tar file."

Is there a mechanism to indicate this? The "a" command in the Scorpion crawling policy file is meant for this purpose, but that is not for use with WWW. (The Scorpion crawling policy file also has several other commands that would be helpful, but also are not for use with WWW.)

There is also the question of what interval the data will be archived at for download in this way; for data that changes often, you will not produce a new archive for every change. This consideration also applies to torrents, since a new hash will be needed for each new version of the file.

m463 3 hours ago | parent | prev | next [-]

> Or better still, this torrent file, where the bots would briefly end up improving the shareability of the data.

that is an amazing thought.

toofy 2 hours ago | parent | prev | next [-]

> The bots have a different model -- they assume that the website is adversarially "hiding" its content.

This should give us pause. If a bot considers this adversarial and refuses to respect the site owner's wishes, that's a big part of the problem.

A bot should not consider that "adversarial".

hamdingers an hour ago | parent | prev [-]

> they assume that the website is adversarially "hiding" its content. They won't believe a random site when it says "Look, stop hitting our API, you can pick all of this data in one go, over in this gzipped tar file."

I'm not sure why you're personifying what is almost certainly a script that fetches documents, parses all the links in them, and then recursively fetches all of those.

When we say "AI scraper" we're describing a crawler controlled by an AI company indiscriminately crawling the web, not a literal AI reading and reasoning about each page... I'm surprised this needs to be said.

ryantgtg an hour ago | parent [-]

It doesn’t need to be said.