mrweasel 2 hours ago
I don't know about others, but we have special rules for Google, Bing, and a few others, rate-limiting them less than some random bot. The problem is scrapers (mostly AI scrapers, from what we can tell). They will pound a site into the ground without caring, and they are becoming increasingly good at hiding their tracks. The only reasonable way to deal with them is to rate-limit every IP by default and then lift some of those restrictions for known, well-behaved bots. We will lift those restrictions if asked, and we frequently look at our statistics to lift them for search engines we might have missed, but it's an uphill battle if you're new and unknown.
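A minimal sketch of that default-deny approach, in Python: every IP gets a small per-IP token bucket, and IPs inside an allowlist of known crawler networks get a much larger one. The CIDR ranges, rates, and burst sizes here are placeholders for illustration, not the published ranges of Googlebot or Bingbot; a real deployment would load those ranges from the search engines' own publications (and verify via reverse DNS) and enforce the limit at the proxy or load balancer rather than in application code.

```python
import ipaddress
import time

# Placeholder allowlist of crawler networks. These are documentation/test
# ranges, NOT real Googlebot/Bingbot ranges; substitute the published ones.
ALLOWLISTED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

DEFAULT_RATE = 1.0        # tokens/second for unknown IPs (assumed value)
DEFAULT_BURST = 5.0       # bucket size for unknown IPs (assumed value)
ALLOWLISTED_RATE = 20.0   # more generous rate for known, well-behaved bots
ALLOWLISTED_BURST = 100.0


class TokenBucket:
    """Simple token bucket: refills at `rate` per second up to `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


buckets = {}


def is_allowlisted(ip_str):
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in ALLOWLISTED_NETWORKS)


def should_serve(ip_str):
    """Return True if a request from ip_str is within its rate limit."""
    if ip_str not in buckets:
        if is_allowlisted(ip_str):
            buckets[ip_str] = TokenBucket(ALLOWLISTED_RATE, ALLOWLISTED_BURST)
        else:
            buckets[ip_str] = TokenBucket(DEFAULT_RATE, DEFAULT_BURST)
    return buckets[ip_str].allow()


if __name__ == "__main__":
    # Unknown IP: the small default bucket empties after a few requests.
    print([should_serve("203.0.113.7") for _ in range(8)])
    # Allowlisted (placeholder) range: the larger bucket absorbs the burst.
    print([should_serve("192.0.2.10") for _ in range(8)])
```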