lucb1e a day ago

Huh, that's interesting: 4.5 seconds for the TCP handshake and an additional 9.2 seconds for the TLS handshake. Is this some kind of captcha? Most bots would disconnect before that, so if you complete it once it knows you're good. (Until the bots catch on, of course, but as long as it works it's relatively unintrusive and doesn't discriminate against uncommon client software, i.e. non-Chrome/ium.) The rest of the requests were lightning fast.

Edit: welcome to your first comment after 9 years on HN btw, nice to have you here!

gluejar 2 hours ago | parent | next [-]

Traffic yesterday was ~20% more than the recent average:

* 4,971,601 sessions

* 177 robots (robots identified based on requests/IP address)

* 863,462 robot files

* 3,390,115 user files

* 20.30% robot files

Served by 5 Apache servers for static content and 1 CherryPy server for dynamic content, hosted at iBiblio.
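(For reference, the 20.30% appears to be robot files as a share of all files served; a quick check, assuming that interpretation:)

    robot_files, user_files = 863_462, 3_390_115
    print(robot_files / (robot_files + user_files))  # ~0.2030, i.e. 20.30%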

codys a day ago | parent | prev | next [-]

I think their site is just slow, potentially because more people than they are used to are trying to view it.

I was unable to load it initially (got an error from Firefox) and had to retry. It's still slow if one forces a reload (Shift+R, etc., to bypass the local cache).

JSeiko a day ago | parent | prev [-]

We are having occasional dips in page speed performance due to LARGE amounts of bot traffic. Full disclosure: we've not really been able to resolve this fully/well. Let us know if you have a good idea for how to deal with it.

uyzstvqs 5 hours ago | parent | next [-]

How do you currently host everything? Your main web server should not be responsible for hosting content. All books should be hosted on mirrors, and clicking download should automatically select a mirror to download it from.

Furthermore:

* Make sure that all books are downloadable in bulk as torrents.

* Every day, generate a CSV file of all available books and their metadata. Distribute this so that bots and user clients can run queries locally, instead of using your search engine.
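A minimal sketch of that last idea, assuming a local catalog in SQLite with made-up table/column names: a daily job writes one CSV that mirrors and bots can fetch, and anyone can then filter it locally instead of hitting the site's search.

    import csv
    import sqlite3

    def dump_catalog(db_path="catalog.db", out_path="catalog.csv"):
        # Daily job: dump the whole catalog to a single CSV.
        # Column names here are illustrative, not the real metadata schema.
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT book_id, title, author, language, formats FROM books"
        )
        with open(out_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["book_id", "title", "author", "language", "formats"])
            writer.writerows(rows)

    def local_search(csv_path, term):
        # Client side: query the downloaded dump locally.
        with open(csv_path, encoding="utf-8") as f:
            return [row for row in csv.DictReader(f)
                    if term.lower() in row["title"].lower()]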

gropo a day ago | parent | prev | next [-]

Do you host a torrent?

I have about 50k of the books; I would have used a torrent of just the txt files if one had been prominently available.

gluejar 2 hours ago | parent [-]

We have a tarball of all text files; a link is posted elsewhere in this thread.

dimava 20 hours ago | parent | prev | next [-]

If it's purely bot traffic, then Anubis could help

You may have seen it on some websites already.

https://anubis.techaro.lol/

lucb1e 44 minutes ago | parent | next [-]

Just to add to the two negative replies, I find Anubis to be the only system that doesn't ever get in the way. My browsers have Javascript enabled and, so far, it never took more than a fraction of a second to complete the checks

Every other system I've run into has constant false positives. Google captchas will sometimes say I've failed and make me do the hardest level (if it wasn't giving me that already); Cloudflare regularly thinks I'm a bot; Codeberg has blocked me before; GitHub signup captchas used to take ~15 minutes to complete and then still said "well, you failed, try again"; GitHub's general rate limiting has false positives (some days I browse a lot, other days little, and on the light days it'll sometimes go "slow down" with no recourse whatsoever, you're just blocked for an indeterminate amount of time); OpenStreetMap blocks my browser at work because I'm using Firefox ESR instead of the latest stable and finds that user agent string implausible; whatever the German railway operator started using a few days ago triggers on me constantly; etc.,

etc.,

etc. Constant blocks everywhere.

With Anubis, my understanding is that you do the proof of work (with whatever implementation you like; it doesn't have to be the JavaScript one they provide) and you can move on without ever doing any task yourself. The power consumption is a shame, but so long as attackers aren't even doing this much, the couple of joules it takes doesn't seem to be an issue.

Of course, the attackers will evolve, but for now...
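For the curious, here is a rough sketch of a hashcash-style proof of work of the kind Anubis uses (the challenge format and difficulty encoding below are assumptions for illustration, not Anubis's actual protocol): the client brute-forces a nonce until the hash of challenge+nonce starts with enough zero hex digits, and the server verifies the result with a single hash.

    import hashlib

    def solve(challenge: str, difficulty: int = 4) -> int:
        # Client: find a nonce so sha256(challenge + nonce) starts with
        # `difficulty` zero hex digits. Cost grows ~16x per extra digit.
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce
            nonce += 1

    def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
        # Server: a single cheap hash check.
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)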

TheDong 17 hours ago | parent | prev | next [-]

Anubis only works against lazy scrapers, and at a cost to your users. I'd prefer people not use it.

Bot traffic comes from machines that usually have a lot of idle cpu (since they're largely blocked on network IO as they scrape a bunch of sites in parallel), so they can trivially solve the anubis "proof of work" challenge, save the cookie, and then not solve it again for that site.

The only reason scrapers don't solve it is that their developers haven't bothered to implement it... and modern scrapers do: Codeberg stopped using Anubis because modern scrapers were updated to solve it.

The "proof of work" has to be easy or else people on old cell phones couldn't access your site (since an old android phone would start to overheat and throttle trying to solve a challenge that would take a modern server even several seconds), and it also consumes your cell-phone user's batteries, which is a really precious resource for them compared to the idle cpu on a server.

autoexec 13 hours ago | parent | prev [-]

Please no. I'm a non-bot who gets stopped and turned away all the time by that menace. Anubis doesn't work without JS.

One of the things I give DuckDuckGo a lot of credit for is that, while they're quick to interrupt me for a bot check (sometimes multiple times in a span of minutes), they'll let me identify ducks even on the most locked-down browsers I use.

lucb1e a day ago | parent | prev | next [-]

I'm only a small-scale sysadmin, but the way I understand the internet is that you send abuse notifications to the IP address block owner and, if it doesn't get resolved, you block. The whois/RDAP database reveals which IPs belong to the same hosting provider or ISP, so you can summarize it all into one list of IP addresses + timestamps per reporting period.

The ISP actually knows which subscriber is on that line and can send them notices, block them, terminate them... loads of things that you simply cannot do because you have no relationship with this person. And frankly, I wouldn't want to need a personal relationship with every website that I visit; my ISP can reach me if there is anything relevant to my continued use of the internet. From personal experience: when I was a teenager, the ISP cutting our household off after an abuse report was an effective way of stopping what I was doing.
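To illustrate the whois/RDAP step, here is a rough sketch that looks up the registered abuse contact for an IP via the rdap.org bootstrap service. The JSON layout follows RFC 9083, but real responses vary between registries, so treat the parsing as an assumption.

    import json
    import urllib.request

    def abuse_contacts(ip: str) -> list[str]:
        # rdap.org redirects to the RIR that holds the block (RIPE, ARIN, ...).
        with urllib.request.urlopen(f"https://rdap.org/ip/{ip}") as resp:
            data = json.load(resp)
        emails = []

        def walk(entities):
            # Entities can be nested; the abuse contact carries the "abuse" role
            # and its email sits inside a vCard property array.
            for ent in entities or []:
                if "abuse" in ent.get("roles", []):
                    for prop in ent.get("vcardArray", [None, []])[1]:
                        if prop[0] == "email":
                            emails.append(prop[3])
                walk(ent.get("entities"))

        walk(data.get("entities"))
        return emails

    print(abuse_contacts("203.0.113.7"))  # documentation-range IP, just for illustration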

miki123211 9 hours ago | parent | next [-]

The problem with this approach is that modern scrapers use hordes of residential proxies and quickly rotate through IP addresses which belong to ASes you get a lot of real traffic from. There's nothing you can do if the ISP won't take any action against the customer.

lucb1e an hour ago | parent | next [-]

I know. All the more reason to do it, right? If an ISP can't keep its network clean, then allowing them to send traffic onto the web is just asking for the problem to continue

Show people a useful error, such as "You are using [ISP name] which sends large volumes of abusive traffic (think of spam and DDoS). They allow the attackers to hop around points across their entire network so we cannot block the abusers more selectively. Despite our attempts to contact them, the abuse continues in volumes which we do not see from other ISPs. To access our corner of the internet, use a different ISP. You could try mobile data instead of Wi-Fi or vice versa.", and they can make their own choices about staying with this ISP if more and more websites show this sort of error

If everyone tries to identify people piecemeal, we all need to implement ~200 different identification systems (assuming each country even has a central system that everyone is signed up to in the first place), or rely on algorithms to tell who is a bot. (I'm currently being misidentified on a daily basis, and I'm, eh, not a bot. Trying to buy public transport tickets is difficult right now, for example, because the monopolist in my country blocks me after a few route queries when I use a Google browser, and after zero queries from Firefox.)

tangledhelix 5 hours ago | parent | prev [-]

Worse than that: even if they were willing to take action, you can't possibly orchestrate filing all of the complaints. It's a drown-in-quicksand problem; you can't fight quicksand one grain at a time.

lucb1e an hour ago | parent [-]

> you can't possibly orchestrate filing all of the complaints

To the ISPs? Each IP range has an abuse email address registered, and that address is specifically exempt from rate limiting at RIPE's WHOIS server. I'm not sure how it is at other RIRs, but I happen to know of this policy.

You can automate the whole thing, provided you have a reliable way of identifying the undesired traffic, which you need anyway to be able to block it by any means. The trouble is in user identification (they'll just use a new IP address from that ISP or hosting provider if you don't tell the provider about the problematic user).
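A minimal sketch of that kind of automation, assuming you already have (ip, timestamp) pairs flagged by whatever detection you use for blocking, and reusing the abuse_contacts() RDAP lookup sketched earlier in the thread to batch them into one summary per provider:

    from collections import defaultdict

    def build_reports(flagged):
        # flagged: iterable of (ip, iso_timestamp) pairs
        # returns {abuse_contact_email: report_text}
        reports = defaultdict(list)
        for ip, timestamp in flagged:
            # abuse_contacts() is the RDAP lookup from the earlier sketch.
            for contact in abuse_contacts(ip) or ["<no abuse contact found>"]:
                reports[contact].append(f"{timestamp} {ip}")
        return {contact: "\n".join(lines) for contact, lines in reports.items()}

    # One message body per provider, listing all offending IPs with timestamps.
    for contact, body in build_reports([("198.51.100.23", "2025-01-07T11:02:31Z")]).items():
        print(f"To: {contact}\n{body}\n")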

tangledhelix an hour ago | parent [-]

See what I wrote above (and let me say I am talking about Project Gutenberg and Distributed Proofreaders here; I am one of the admins on both). A large amount of the hassle traffic we've seen is as I wrote above: the IPs come from everywhere, and in many cases each IP makes a single request and doesn't come back. They change user agents dynamically, etc., to masquerade as regular traffic. They come from residential, cloud/hyperscale, corporate, educational, and government networks, on every continent. This is many thousands of "open a ticket with someone" events per hour. It's as difficult to fight as a DDoS itself, for the same reasons (presumably the harvesting parties know that, and that's exactly why this approach is used).

Others online have been writing about their own experience with the same stuff; it's not unique to PG at all, it's everywhere. Talk to anyone that runs a web server and they'll have these stories...

lucb1e 30 minutes ago | parent [-]

I'm aware; I also host various websites that see an IP make a single request to the most unlikely of deep pages. It's usually not hard to correlate that with similar surprising requests from the same ISP, though, and that's exactly why it would be useful to talk to them: they know who used that IP address at the given timestamp. If they get a hundred complaints from different websites, the ISP is in the unique position to correlate those and find the subscriber(s) that are problematic.

You also don't have to send out 1k support requests per hour. You could trial it with a hosting provider that you expect to be responsive and see how it works out.

edit: like, I just don't see another solution short of banning anonymity online. Each site would have to know who you are. Someone has to be able to trace abuse back to the person doing it, or there can't be any rules we can enforce. Imo it's better if that's the ISP (or VPN provider, say), who already has this information anyway.

Jolter a day ago | parent | prev [-]

It’s effective against teenagers, maybe. Not so much against Amazon, Meta, or whatever botnet/crawler is coming out of China these days from up-and-coming AI companies.

lucb1e an hour ago | parent | next [-]

Then block all of Amazon, Meta, or wherever the botnet/crawler traffic is coming from that doesn't honor robots.txt, sends DDoS reflection traffic, submits SMTP messages (in large volumes, not just probing) for domains they're not authorized for per SPF, or whatever else applies to the protocol you're using.

If they can't keep their ranges clean to a reasonable degree, their customers will need to move if they want to access your part of the internet. New sign-ups will always be hard, so some amount of abuse is expected, but if it's the same abuse traffic for weeks after you've notified them, well, it stops being your problem at some point

Jolter an hour ago | parent [-]

See the other comments in this thread. The perpetrators are unknown and are jumping between residential IPs. Possibly botnets?

lucb1e an hour ago | parent [-]

Then see my other replies in the thread where I've specifically addressed residential IPs, e.g.: https://news.ycombinator.com/item?id=48163060

tonetegeatinst 21 hours ago | parent | prev [-]

I mean, you could block entire AS numbers that belong to Amazon or big-tech datacenters.
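A rough sketch of what that could look like, using RIPEstat's announced-prefixes endpoint to expand an AS number into prefixes and emitting nftables commands (AS16509 is one of Amazon's, used purely as an example; the table/set names are made up):

    import json
    import urllib.request

    ASN = "AS16509"  # one of Amazon's ASNs, just as an example
    url = f"https://stat.ripe.net/data/announced-prefixes/data.json?resource={ASN}"
    with urllib.request.urlopen(url) as resp:
        prefixes = [p["prefix"] for p in json.load(resp)["data"]["prefixes"]]

    # Emit nftables commands for the IPv4 prefixes; the set/table names are
    # assumptions, and IPv6 prefixes would need a second set of type ipv6_addr.
    for prefix in prefixes:
        if ":" not in prefix:
            print(f"nft add element inet filter blocked_v4 {{ {prefix} }}")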

tangledhelix 20 hours ago | parent [-]

It wouldn't help; much of the traffic we've observed looks closer to DDoS patterns: IPs from all over the world, many different networks, each IP makes only one request and doesn't come back. It's highly distributed; no form of blocking would be effective except maybe a captcha or proof of work.

TurdF3rguson a day ago | parent | prev | next [-]

CF cache?

jimnotgym 11 hours ago | parent | prev [-]

I would love it if you could detect AI scraper bots and feed them AI-generated BS instead of the real books...

tangledhelix an hour ago | parent | next [-]

Cloudflare sells that as a product, they call it Labyrinth IIRC.

miki123211 9 hours ago | parent | prev [-]

This is very, very, very dangerous.

Occasionally, you misclassify a real user as a bot, and then your reputation is ruined forever.

The official Polish train schedules website did this recently, feeding incorrect departure and arrival times to IP addresses known for aggressive scraping, without taking CGNAT into account. People... have noticed[1].

[1] (Polish) https://zaufanatrzeciastrona.pl/post/kto-i-dlaczego-losuje-w...