asphero 15 hours ago

Interesting approach. The scraper-vs-site-owner arms race is real.

On the flip side of this discussion - if you're building a scraper yourself, there are ways to be less annoying:

1. Run locally instead of from cloud servers. Most aggressive blocking targets VPS IPs. A desktop app using the user's home IP looks like normal browsing.

2. Respect rate limits and add delays between requests. Obvious, but often ignored (a minimal sketch follows this list).

3. Use RSS feeds when available - many sites leave them open even when blocking scrapers (feed discovery is also sketched below).
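
For (2), a minimal sketch in Python, assuming the requests library; the user agent string, delay, and fallback wait are placeholders to tune:

    import time
    import requests

    session = requests.Session()
    # Identify yourself; some operators are more lenient with honest user agents.
    session.headers["User-Agent"] = "example-scraper/0.1 (contact: you@example.com)"

    def polite_get(url, delay=2.0):
        """Fetch a URL with a fixed delay between requests, honoring 429 Retry-After."""
        time.sleep(delay)
        resp = session.get(url, timeout=30)
        if resp.status_code == 429:
            # Retry-After can also be an HTTP date; this sketch only handles seconds.
            wait = resp.headers.get("Retry-After", "60")
            time.sleep(int(wait) if wait.isdigit() else 60)
            resp = session.get(url, timeout=30)
        resp.raise_for_status()
        return resp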
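
And for (3), feeds are usually advertised in the page head via <link rel="alternate">, so discovery takes only a few lines (this sketch assumes BeautifulSoup is available):

    from urllib.parse import urljoin
    import requests
    from bs4 import BeautifulSoup

    FEED_TYPES = ("application/rss+xml", "application/atom+xml")

    def find_feeds(page_url):
        """Return feed URLs advertised in a page's <head>."""
        soup = BeautifulSoup(requests.get(page_url, timeout=30).text, "html.parser")
        feeds = []
        for link in soup.find_all("link"):
            # rel is a multi-valued attribute in bs4, so check membership.
            if "alternate" in (link.get("rel") or []) and link.get("type") in FEED_TYPES:
                feeds.append(urljoin(page_url, link.get("href", "")))
        return feeds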

I built a Reddit data tool (search "reddit wappkit" if curious) and the "local IP" approach basically eliminated all blocking issues. Reddit is pretty aggressive against server IPs but doesn't bother home connections.

The porn-link solution is creative though. Fight absurdity with absurdity I guess.

socialcommenter 3 hours ago | parent | next

Without wanting to upset anyone - what makes you interested in sharing tips for team scraper?

(Overgeneralising a bit) site owners are mostly acting for public benefit, whereas scrapers act for their own benefit / private interests.

I imagine most people would land on team site-owner, if they were asked. I certainly would.

P.S. is the best way to scrape fairly just to respect robots.txt?
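
For the mechanical part of that, Python's standard library ships a robots.txt parser; the URLs and agent name below are placeholders:

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()
    print(rp.can_fetch("example-scraper", "https://example.com/some/page"))
    print(rp.crawl_delay("example-scraper"))  # None if no Crawl-delay directive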

n1xis10t 9 minutes ago | parent

I think "scraper vs siteowners" is a false dichotomy. Scrapers will always need to exist as long as we want search engines and archival services. We will need small versions of these services to keep popping up every now and then to keep the big guys on their toes, and the smaller guys need advice for scraping politely.

rhdunn 7 hours ago | parent | prev

Plus simple caching, so you don't redownload the same file/page multiple times.
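
Even a dumb on-disk cache keyed by URL hash goes a long way; a rough sketch (the cache directory and names are made up):

    import hashlib
    import pathlib
    import requests

    CACHE_DIR = pathlib.Path("cache")
    CACHE_DIR.mkdir(exist_ok=True)

    def cached_get(url):
        """Fetch a URL once; later calls return the on-disk copy."""
        path = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
        if path.exists():
            return path.read_bytes()
        body = requests.get(url, timeout=30).content
        path.write_bytes(body)
        return body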

It should also be easy to detect a Forgejo, Gitea, or similar hosting site, locate the git URL, and clone the repo.
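
Gitea and Forgejo share the same REST API, so one way to do the detection is to probe the version endpoint and then shell out to git. A sketch, assuming the instance allows anonymous API access and uses the standard /{owner}/{repo}.git clone layout:

    import subprocess
    import requests

    def looks_like_gitea(base_url):
        """Probe the Gitea/Forgejo version endpoint."""
        try:
            resp = requests.get(f"{base_url}/api/v1/version", timeout=10)
            return resp.ok and "version" in resp.json()
        except (requests.RequestException, ValueError):
            return False

    def clone(base_url, owner, repo, dest):
        # Gitea/Forgejo serve repos over HTTPS at {base_url}/{owner}/{repo}.git
        subprocess.run(["git", "clone", f"{base_url}/{owner}/{repo}.git", dest], check=True)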