martinald 8 hours ago

Many reasons, but DDoS protection has massive network effects. The more customers you have (and therefore the more bandwidth you provision), the easier it is to hold up against a DDoS, since a DDoS usually targets just one customer.

So there are massive economies of scale. A small CDN with (say) 10,000 customers and 10 Mbit/s provisioned per customer can handle a 100 Gbit/s DDoS (way too simplistic, but hopefully you get the idea) - way too small.

If you provision the same average traffic per customer but have 1 million customers, you can handle a DDoS 100x the size.
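
A minimal sketch of that arithmetic (same toy numbers, same simplification):

    def absorbable_ddos_gbps(customers: int, mbps_per_customer: float) -> float:
        """Naive model: total provisioned bandwidth doubles as DDoS headroom."""
        return customers * mbps_per_customer / 1000  # Mbit/s -> Gbit/s

    small = absorbable_ddos_gbps(10_000, 10)      # 100 Gbit/s
    big = absorbable_ddos_gbps(1_000_000, 10)     # 10,000 Gbit/s
    print(f"small CDN: {small:,.0f} Gbit/s, big CDN: {big:,.0f} Gbit/s ({big / small:.0f}x)")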

The only way to compete with this is to massively overprovision bandwidth per customer, which is expensive: those customers won't pay more just for you to have extra redundancy because you are smaller.

In a way (like many things in infrastructure) CDNs are natural monopolies. The bigger you get -> the more bandwidth and PoPs you can have -> the more attractive you are to more customers (and this repeats over and over).

It was probably very astute of Cloudflare to realise that offering such a generous free plan was a key step in this.

kordlessagain 7 hours ago

Your argument is technically flawed.

In a CDN, customers consume bandwidth; they do not contribute it. If Cloudflare adds 1 million free customers, they do not magically acquire 1 million extra pipes to the internet backbone. They acquire 1 million new liabilities that require more infrastructure investment.

All you are doing is echoing their pitch book. Of course they want to skim their share of the pie.

__alexs 6 hours ago

I imagine every single customer is provisioned based on some expected peak traffic, and that's what they base their capital investment in bandwidth on.

However, most customers are rarely at their peak, which gives you tremendous spare capacity that is frequently doing nothing - and can be used to eat DDoS attacks, assuming the attacks are uncorrelated with customer peaks. Cloudflare advertises this spare capacity as "DDoS protection."
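
A toy model of that multiplexing effect (the demand distribution here is an assumption, purely illustrative):

    import random

    # Toy statistical multiplexing: each customer is provisioned for its peak,
    # but at any instant most customers run far below peak, leaving large
    # aggregate headroom that can absorb an (uncorrelated) DDoS.
    random.seed(0)
    customers = 100_000
    peak_mbps = 10.0                       # what each customer is provisioned for
    provisioned = customers * peak_mbps    # capital is sized against this

    # Assume instantaneous demand is 0-20% of peak per customer (made-up figure).
    demand = sum(random.uniform(0.0, 0.2) * peak_mbps for _ in range(customers))

    headroom = provisioned - demand
    print(f"provisioned: {provisioned / 1000:,.0f} Gbit/s, "
          f"in use: {demand / 1000:,.0f} Gbit/s, "
          f"spare (DDoS headroom): {headroom / 1000:,.0f} Gbit/s")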

I suppose in theory you could massively optimise utilisation of your links, but that would come at the cost of DDoS protection and might not improve your margin very meaningfully, especially if customers care a lot about staying online.

bawolff 5 hours ago

> In a CDN, customers consume bandwidth; they do not contribute it

They contribute money which buys infrastructure.

> If Cloudflare adds 1 million free customers,

Is the free tier really customers? Regardless, most of them are so small that they don't cost cloudflare much anyway; the infrastructure is already there. It's worth it to them for the goodwill it generates, which leads to future paying customers. It probably also gives them visibility into what is good vs bad traffic.

1 million small sites could very well cost cloudflare less than 1 big site.

LMYahooTFY 5 hours ago

You're missing the economies of scale part.

OP is saying it's cheaper overall for a 10-million-customer company to add infrastructure for 1 million more people than it is for a 10,000-customer company to add infrastructure for 1,000 more.

If you're looking at this as a "share of the pie", it's probably not going to make sense. The industry is not zero sum.

jiveturkey 3 hours ago

You aren't understanding economies of scale and peak-to-average ratios.

The reason I use cloud compute -- elastic infrastructure, because I can't afford to build for my own peaks -- is the same reason large service providers "work".
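
As a sketch, with made-up numbers:

    # Toy peak-to-average arithmetic (all numbers are illustrative assumptions).
    # Building for your own peak means paying for capacity that idles most of
    # the time; renting elastic capacity means paying a premium only on usage.
    avg_load = 1.0           # normalized average demand
    peak_load = 20.0         # assume a 20:1 peak-to-average ratio
    rent_premium = 3.0       # assume the provider charges 3x per unit used

    own_cost = peak_load * 1.0            # must build for the peak
    rent_cost = avg_load * rent_premium   # pay only for average usage

    print(f"own the peak: {own_cost:.0f} units, rent elastically: {rent_cost:.0f} units")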

It's funny how we always focus on Cloudflare, but all cloud providers have this same concentration downside. I think it's because Cloudflare loves to talk out of both sides of their mouth.

kordlessagain 2 hours ago

The "economies of scale" defense of Cloudflare ignores a fundamental reality: 23.8 million websites run on Cloudflare's free tier versus only 210,000 paying customers or so. Free users are not a strategic asset. They are an uncompensated cost, full stop. Cloudflare doesn't absorb this loss out of altruism; they monetize it by building AI bot-detection systems, charging for bot mitigation, and extracting threat intelligence data. Today's outage was caused by a bug in Cloudflare's service to combat bots.

That's AI bots, BTW. Bots like Playwright or Crawl4AI, which provide a useful service to individuals using agentic AI. Cloudflare is hostile to these types of users, even though they likely cost websites nothing to support well.

The "scale saves money" argument commits a critical error: it counts only the benefits of concentration while externally distributing the costs.

Yes, economies of scale exist. But Cloudflare's scale creates catastrophic systemic risk that individual companies using cloud compute never would. An estimated $5-15 billion was lost for every hour of the outage according to Tom's Guide. That cost didn't disappear. It was transferred to millions of websites, businesses, and users who had zero choice in the matter.

Again, corporations shitting on free users. It's a bad habit and a dark pattern.

Even worse, were you hoping to call an Uber this morning for your $5K vacation? Good luck.

This is worse than pure economic inefficiency. Cloudflare operates as an authorized man-in-the-middle to 20% of the internet, decrypting and inspecting traffic flows. When their systems fail, not due to attacks, but to internal bugs in their monetization systems, they don't just lose uptime.

They create a security vulnerability where encrypted connections briefly lose their encryption guarantee. They've done this before (Cloudbleed), and they'll do it again. Stop pretending to have rational arguments with irrational future outcomes.

The deeper problem: compute, storage, and networking are cheap. The "we need Cloudflare's scale for DDoS protection" argument is a circular justification for the very concentration that makes DDoS attractive in the first place. In a fragmented internet with 10 CDNs, a successful DDoS on one affects 10% of users. In a Cloudflare-dependent internet, a DDoS (or a bug) affects 50%, if Cloudflare is unable to mitigate it (or DDoSes themselves).

Cloudflare has inserted themselves as an unremovable chokepoint. Their business model depends on staying that chokepoint. Their argument for why they must stay a chokepoint is self-reinforcing. And every outage proves the model is rotten.

jiveturkey 7 minutes ago

hang on, you're reading some kind of cloudflare advocacy into my post. apologies if i implied that. i don't like to come off as a crank is all. IMO cloudflare is an evil that needs to be defeated. i'm just explaining how their business model "works" and why massive economy of scale matters, to support the GP poster.

i don't even think they are evil because of the concentration of power, that's just a problematic issue. the evil part is they convince themselves they aren't the bad guys. that they are saving us from ourselves. that the things they do are net positives, or even absolute positives. like the whole "let's defend the internet from AI crawlers" position they appointed themselves sheriff on, that i think you're referencing. it's an extremely dangerous position we've allowed them to occupy.

> they monetize it

yes, and they can't do this without the scale.

> scale saves money

any company, uber for example, can design their infra to not rely on a sole provider. but why? their customers aren't going to leave in droves when a pretty reliable provider has the occasional hiccup. it's not worth the cost, so why shouldn't they externalize it? uber isn't in business to make the internet a better place. so yes, scale does save money. you're arguing at a higher principle than the level at which architectural decisions are made.

i'm not defending economy of scale as a necessary evil. i'm just backing up that it's how cloudflare is built, and that it is in fact useful to customers.

karmelapple 6 hours ago

And how many companies want to also be able to build out their own CDN?

Not every company can be an expert at everything.

But perhaps many of us could buy from a CDN other than the major players if we want to reduce the likelihood of mass outages like this.

codedokode 8 hours ago

In my opinion, DDoS is only possible because there is no network protocol that lets a host control traffic filtering on upstream providers (deny traffic from certain subnets or countries). If such a protocol existed, everybody would prefer to run their own systems rather than rely on a harmful monopoly.
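
For what it's worth, the closest real mechanism is BGP FlowSpec (RFC 8955), but it's negotiated between networks, not exposed to end hosts. A hypothetical host-level request (names and fields made up) might look like:

    from dataclasses import dataclass

    # Hypothetical sketch of such a request: a host asks its upstream to drop
    # traffic matching a rule, scoped strictly to the host's own addresses so
    # it cannot knock anyone else offline.
    @dataclass
    class UpstreamFilterRequest:
        target_prefix: str       # must cover only the requester's own addresses
        deny_sources: list[str]  # source subnets to drop
        ttl_seconds: int         # rules expire so stale blocks don't linger

    req = UpstreamFilterRequest(
        target_prefix="203.0.113.10/32",     # documentation IP (RFC 5737)
        deny_sources=["198.51.100.0/24"],
        ttl_seconds=3600,
    )
    print(req)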

gnfargbl 7 hours ago

The recent Azure DDoS used 500k botnet IPs. These will have been widely distributed across subnets and countries, so your blocking approach would not have been an effective mitigation.

Identifying and dynamically blocking the 500k offending IPs would certainly be possible technically -- 500k /32s is not a hard filtering problem -- but I seriously question the operational ability of internet providers to perform such granular blocking in real-time against dynamic targets.
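
As a sketch of the software side (the real constraint is router TCAM/ACL table capacity at line rate, not algorithmic difficulty):

    import random, time

    # Quick check that matching against 500k exact /32s is cheap in software:
    # a hash set gives O(1) lookups per packet.
    random.seed(1)
    blocked = {random.getrandbits(32) for _ in range(500_000)}  # 500k IPv4s as ints

    packets = [random.getrandbits(32) for _ in range(1_000_000)]  # 1M source addrs
    start = time.perf_counter()
    dropped = sum(src in blocked for src in packets)
    elapsed = time.perf_counter() - start
    print(f"checked 1,000,000 packets in {elapsed:.2f}s, dropped {dropped}")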

I also have concerns that automated blocking protocols would be widely abused by bad actors who are able to engineer their way into the network at a carrier level (i.e. certain governments).

__alexs 6 hours ago

> 500k /32s is not a hard filtering problem

Is this really true? What device in the network are you loading that filter into? Is it even capable of handling the packet throughput of that many clients while also handling such a large block list?

nine_k 2 hours ago

But this is not one subnet. It is a large number of IPs distributed across a bunch of providers, handled by possibly dozens if not hundreds of routers along the way. Each of these routers would have no trouble blocking the dozen or two IPs behind it that are currently involved in a DDoS attack.

But this would require a service like the DNSBL / RBL lists that email providers use. Mutually trusting big players would exchange lists of IPs currently involved in DDoS attacks and block them way downstream in their networks, a few hops from the originating machines. They could even notify the affected customers.

But this would require a lot of work to build, and a serious amount of care to operate correctly and efficiently. ISPs don't seem to have a monetary incentive to do that.
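
A minimal sketch of what that exchange could look like (all names and addresses made up; a real system would need signing, trust, and expiry):

    import ipaddress

    # DNSBL-style exchange between providers: each reports attackers it
    # observes, then blocks the shared entries that fall inside address
    # space it actually handles, close to the source.
    observed = {
        "provider_a": {"198.51.100.7", "198.51.100.9"},
        "provider_b": {"203.0.113.44"},
    }
    shared_blocklist = set().union(*observed.values())

    def locally_relevant(blocklist: set[str], my_prefix: str) -> set[str]:
        """Entries a provider can block near the source, in its own space."""
        net = ipaddress.ip_network(my_prefix)
        return {ip for ip in blocklist if ipaddress.ip_address(ip) in net}

    print(locally_relevant(shared_blocklist, "198.51.100.0/24"))
    # -> {'198.51.100.7', '198.51.100.9'} (set order is arbitrary)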

tw04 7 hours ago

It also completely overlooks the fact that some of the traffic has spoofed source IP addresses, and a bad actor could use automated blackholing to knock a legitimate site offline.

codedokode 6 hours ago

> a bad actor could use automated black holing to knock a legitimate site offline.

No, in my concept the host can only manage the traffic targeted at it and not at other hosts.

tw04 2 hours ago

That already exists… it's part of Cloudflare's and other vendors' mitigation strategies. There's absolutely no chance ISPs are going to extend that functionality to random individuals on the internet.

peanut-walrus 7 hours ago

What traffic would you ask the upstream providers to block if you got hit by Aisuru? The botnet consists of residential routers, so those are the same networks your users will be originating from. Sure, in the best case, if your site is very regional, you can just block all traffic from outside your country - but most services don't have that luxury.

Blocking individual IP addresses? Sure, but consider that before your service detects enough anomalous traffic from one particular IP and is able to send the upstream block request, your service will already be down from the aggregate traffic. Even a "slow" DDoS with <10 packets per second per source is enough to saturate your 10 Gbps link if the attacker has a million machines to originate traffic from.
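
The aggregate arithmetic, with an assumed packet size:

    # Aggregate bandwidth of a "slow" per-source DDoS (packet size is an
    # assumption; most plausible sizes give the same conclusion).
    sources = 1_000_000
    pps_per_source = 10
    packet_bytes = 1200

    aggregate_gbps = sources * pps_per_source * packet_bytes * 8 / 1e9
    print(f"aggregate: {aggregate_gbps:.0f} Gbit/s against a 10 Gbit/s link")
    # -> 96 Gbit/s: even tiny per-source rates overwhelm the link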

codedokode 6 hours ago

In many cases the infected devices are in developing countries where none of your customers are. And many sites are regional - for example, a medium-sized business operating within one country, or even one city.

And even if the attack comes from your own country, it is better to block part of your customers and figure out what to do next than to have your site down.

amaccuish 7 hours ago

Could it not be argued that ISPs should be forced to block users with vulnerable devices?

They have all the data on what CPE a user has; they can send a letter and email with a deadline, and cut the user off if it expires and the router still hasn't been updated / is still exposed to the wider internet.

hombre_fatal 5 hours ago

My dad’s small town ISP called him to say his household connection recently started saturating the link 24/7 and to look into whether a device had been compromised.

(Turns out some raspi reseller shipped a product with an empty default username/password.)

While a cute story, how do you scale that? And what about all the users that would be incapable of troubleshooting it, like if their laptop, roku, or smart lightbulb were compromised? They just lose internet?

And what about a botnet that doesn’t saturate your connection, how does your ISP even know? They get full access to your traffic for heuristics? What if it’s just one curl request per N seconds?

Not many good answers available if any.

mschuster91 3 hours ago

> While a cute story, how do you scale that? And what about all the users that would be incapable of troubleshooting it, like if their laptop, roku, or smart lightbulb were compromised? They just lose internet?

Uh, yes. Exactly and plainly that. We also suspend people's driver's licenses, or at the very least seriously fine them, if they misbehave on the road, including driving around in unsafe cars.

Access to the Internet should be a privilege, not a right. Maybe the resulting anger from widespread crackdowns would be enough of a push for legislators to demand better security from device vendors.

> And what about a botnet that doesn’t saturate your connection, how does your ISP even know?

In ye olde days providers had (or had to have) abuse@ mailboxes. Credible evidence of malicious behavior reported there did lead to customers being told to clean up shop or else.

SJC_Hacker 6 hours ago

Xfinity did exactly this to me a few years ago. I wasn't compromised but tried running a blockchain node on my machine. The connection to the whole house was blocked off until I stopped it.

encom 6 hours ago

It could be argued that ISPs should not snoop on my traffic, barring a court order.

powerpixel 8 hours ago

> there is no network protocol for a host to control traffic filtering on upstream providers (deny traffic from certain subnets or countries).

There is no network protocol per se, but there are commercial solutions like Fortinet that can block countries, IIRC. Note that it's only IP-range based, so it's not worth a lot.

mrktf 7 hours ago

I think the parent means there is no network protocol that can propagate blocking between providers in a sane manner (something like BGP, but for firewalls).

edit: yes, you can use BGP to blackhole subnet traffic - but the standard doesn't play well if you want to blackhole unrelated subnets from an upstream network.

wbl 8 hours ago

Unless you filter at the far end of the bottleneck, you still go offline.

jabart 7 hours ago

I'm pretty sure BGP magic will let you blackhole a whole subnet.
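
Right - the usual mechanism is remotely triggered blackholing (RTBH): the victim network announces a very specific route for the attacked address, tagged with the well-known BLACKHOLE community (65535:666, RFC 7999), and upstreams that honor it drop traffic to that address. The trade-off is that the target goes dark - you complete the DoS against that one address to save the rest of the network. Conceptually (a sketch as data, not real router config):

    # Sketch of an RTBH announcement (illustrative values only).
    announcement = {
        "prefix": "203.0.113.10/32",   # the attacked host (documentation IP)
        "communities": ["65535:666"],  # well-known BLACKHOLE community, RFC 7999
        "next_hop": "192.0.2.1",       # conventionally statically routed to discard
    }
    print(announcement)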