Philpax 7 days ago
The argument isn't that it's difficult for them to circumvent - it's not - but that it adds enough friction to force them to rethink how they're scraping at scale and/or self-throttle. I personally don't care about the act of scraping itself, but the volume of scraping traffic has forced administrators' hands here. I suspect we'd be seeing far fewer deployments if the scrapers had behaved themselves to begin with.
davidclark 7 days ago
The OP author shows that the cost of scraping an Anubis-protected site is essentially zero: the challenge is a fairly simple PoW algorithm that a scraper can solve cheaply. It adds basically no compute time or cost for a crawler run out of a data center. How does that force rethinking?
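
For context on why the cost is so low: an Anubis-style challenge amounts to finding a nonce such that the SHA-256 hash of challenge-plus-nonce begins with some number of zero hex digits. A minimal solver sketch in Python illustrates this; the challenge string and difficulty here are made up for illustration, and the exact hashing scheme is an assumption about the general approach, not Anubis's precise implementation:

    import hashlib
    import time

    def solve_pow(challenge: str, difficulty: int) -> int:
        # Brute-force a nonce so sha256(challenge + nonce) starts with
        # `difficulty` leading zero hex digits (assumed scheme).
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    start = time.perf_counter()
    nonce = solve_pow("example-challenge", 4)  # hypothetical inputs
    elapsed = time.perf_counter() - start
    print(f"nonce={nonce} in {elapsed:.3f}s")

At difficulty 4 this is on the order of 16^4 ≈ 65k hash attempts, which completes in well under a second on a single data-center core.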