firecall 8 hours ago

Sadly, AI bots and crawlers have made CF the only affordable way to actually keep my sites up without incurring excessive image serving costs.

Those TikTok AI crawlers were destroying some of my sites.

Millions of images served to ByteSpider bots, over and over again. They wouldn't stop. It was relentless abuse. :-(

Now I've just blocked them all with CF.

flakeoil 8 hours ago | parent | next [-]

> Now I've just blocked them all with CF.

Yeah, they for sure let nothing through right now. ;)

marcosdumay 8 hours ago | parent [-]

There isn't too much of a difference from their normal behavior.

zenmac 8 hours ago | parent | prev | next [-]

Wouldn't it be trivial to just write a ufw rule to block the crawler IPs?
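Something like this one-liner is what I have in mind (the address is a placeholder):

    # hypothetical ufw rule dropping traffic from one crawler range
    ufw deny from 203.0.113.0/24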

At times like this I'm really glad we self-hosted.

cornedor 8 hours ago | parent | next [-]

No, since there are simply too many. For an e-commerce site I work for, we once had an issue where a bad actor tried to crawl the site to set up scam shops. The list of IPs was way too broad, and the user agents way too generic or random.

72deluxe 8 hours ago | parent [-]

Could you not also use an ASN list like https://github.com/brianhama/bad-asn-list and add blocks of IPs to a blocklist (e.g. ipset on Linux)? Most of the scripty traffic comes from VPSs.
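A minimal sketch of that approach, assuming the CSV layout in that repo (ASN in the first column) and using RIPEstat's announced-prefixes endpoint to expand each ASN into network blocks; file names and the set name are placeholders:

    # build an ipset restore file from the bad-asn-list CSV
    import csv
    import json
    import urllib.request

    def prefixes_for_asn(asn):
        # RIPEstat data API: prefixes currently announced by this ASN
        url = f"https://stat.ripe.net/data/announced-prefixes/data.json?resource=AS{asn}"
        with urllib.request.urlopen(url) as resp:
            return [p["prefix"] for p in json.load(resp)["data"]["prefixes"]]

    with open("bad-asn-list.csv") as f, open("badasn.restore", "w") as out:
        out.write("create badasn hash:net -exist\n")
        for row in csv.reader(f):
            if not row or not row[0].isdigit():
                continue  # skip the header line
            for prefix in prefixes_for_asn(row[0]):
                if ":" not in prefix:  # keep IPv4 only for this hash:net set
                    out.write(f"add badasn {prefix} -exist\n")

Then load it with `ipset restore < badasn.restore` and drop matches with something like `iptables -I INPUT -m set --match-set badasn src -j DROP`.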

jeroenhd 6 hours ago | parent [-]

Thanks to widespread botnets, most scrapers fall back to using "residential proxies" the moment you block their cloud addresses. Same load, but now you risk accidentally blocking customers coming from similar net blocks.

Blocking ASNs is one step of the fight, but unfortunately it's not the solution.

immibis an hour ago | parent [-]

Hypothetically, as a cyber-criminal, I'd like to thank the blacklist industry for bringing so much money into criminal enterprises by making residential proxies mandatory for all scraping.

tpetry 8 hours ago | parent | prev | next [-]

It's not one IP to block, it's thousands! And they're also scattered across different IP networks, so no simple CIDR block is possible. Oh, and just for fun, when you block their datacenter IPs they switch to hundreds of residential network IPs.

Yes, they are really hard to block. In the end I switched to Cloudflare just so they can handle this mess.

Bender 7 hours ago | parent | prev | next [-]

> Wouldn't it be trivial to just write a ufw rule to block the crawler IPs?

Probably more effective would be to get the bots to exclude your IP/domain. I do this for SSH, leaving it open on my public SFTP servers on purpose. [1] If I can get 5 bot owners to exclude me, that could be upwards of 250k+ nodes, mostly mobile IPs, that stop talking to me.

Just create something that confuses and craps up the bots. With SSH bots this is trivial, as most SSH bot libraries and code are unmaintained and poorly written to begin with. In my SSH example, look for the VersionAddendum setting: old versions of ssh, old ssh libraries, and code that tries to implement ssh itself will choke on a long banner string. Not to be confused with the text banner file.
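The knob is literally one line in sshd_config; a minimal sketch (the padding string here is an arbitrary stand-in, the linked page goes much further):

    # /etc/ssh/sshd_config
    # Append junk to the protocol version banner; old or unmaintained
    # bot SSH libraries tend to choke parsing an oversized banner.
    VersionAddendum XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx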

I'm sure the clever people here could make something similar for HTTPS, and especially for GPT/LLM bots, at the risk of being flagged "malicious".

[1] - https://mirror.newsdump.org/confuse-some-ssh-bots.html

About 90%+ of bots cannot visit this URL (nor can real people who have disabled HTTP/2.0 in their browser).

firecall 8 hours ago | parent | prev [-]

Maybe :-)

But for a small operation, AKA just me, it's one more thing for me to get my head around and manage.

I don't run just one website or one service.

It's 100s of sites across multiple platforms!

Not sure I could ever keep up playing AI Crawler and IP Whack-A-Mole!

UltraSane 40 minutes ago | parent | prev | next [-]

Can you use per-IP rate limiting?
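If there's a reverse proxy in front, that's cheap to try; e.g. a minimal nginx sketch (zone name and numbers are placeholders):

    # in nginx's http {} context: cap each client IP at 5 req/s
    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

    server {
        location / {
            # allow short bursts of 20 requests; excess gets a 503
            limit_req zone=perip burst=20 nodelay;
        }
    }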

immibis an hour ago | parent | prev | next [-]

How many requests is your site getting, how long does it take to process each request, and why does it take that long?

unethical_ban 6 hours ago | parent | prev | next [-]

I don't understand. What exactly are they doing, what are their goals? I'm not trying to argue, I genuinely don't get it.

edit: I guess I understand "AI bots scraping sites for data to feed LLM training" but what about the image serving?

Aeolun 8 hours ago | parent | prev [-]

> Now I've just blocked them all with CF.

You realize it was possible to block bad actors before Cloudflare, right? They just made it easier; they didn't make it possible in the first place.

firecall 8 hours ago | parent | next [-]

Of course :-)

And my image CDN blocked ByteSpider for me.

For a while I also blocked the entirety of Singapore due to all the bots coming out of AWS over there!

But it's honestly something I just don't need to be thinking about for every single site I run across a multitude of platforms.

Having said that, I will now look at the options for the business critical services I operate for clients!

delfinom 7 hours ago | parent | prev [-]

Bad actors now have access to tens of thousands of IPs and servers on the fly.

The cost of hardware and software resources these days is absolute peanuts compared to 10 years ago, and cloud services and APIs have made managing them trivial as hell.

Cloudflare is simply an evolution in response to the other side also having evolved greatly, both legitimate and illegitimate users.