otterley 3 hours ago

> work has already begun on how we will harden them against failures like this in the future. In particular we are:

> Hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input

> Enabling more global kill switches for features

> Eliminating the ability for core dumps or other error reports to overwhelm system resources

> Reviewing failure modes for error conditions across all core proxy modules

Absent from this list are canary deployments and incremental or wave-based deployment of configuration files (which are often as dangerous as code changes) across fault isolation boundaries -- assuming Cloudflare has such boundaries at all. How are they going to contain the blast radius in the future?
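
For illustration, a minimal sketch of what a wave-based config rollout with an automatic abort might look like (hypothetical cells, bake times, and thresholds -- nothing specific to Cloudflare's actual pipeline):

    use std::{thread, time::Duration};

    // Hypothetical wave-based rollout: push a config to one fault-isolation cell
    // at a time, bake, and abort (plus roll back) if the cell's error rate regresses.
    struct Cell {
        name: &'static str,
    }

    fn push_config(cell: &Cell, version: u64) {
        println!("pushed config v{version} to {}", cell.name);
    }

    fn error_rate(_cell: &Cell) -> f64 {
        // A real system would query the cell's metrics here; stubbed for the sketch.
        0.001
    }

    fn roll_back(cell: &Cell, version: u64) {
        println!("rolled back v{version} on {}", cell.name);
    }

    fn deploy_in_waves(cells: &[Cell], version: u64, max_error_rate: f64) -> Result<(), String> {
        for cell in cells {
            push_config(cell, version);
            thread::sleep(Duration::from_secs(60)); // bake before widening the blast radius
            if error_rate(cell) > max_error_rate {
                roll_back(cell, version);
                return Err(format!("aborting rollout: error rate regressed in {}", cell.name));
            }
        }
        Ok(())
    }

    fn main() {
        let cells = [
            Cell { name: "canary" },
            Cell { name: "wave-1" },
            Cell { name: "wave-2" },
        ];
        if let Err(e) = deploy_in_waves(&cells, 42, 0.01) {
            eprintln!("{e}");
        }
    }

The point isn't the specific mechanism; it's that a bad config stops at the first wave instead of reaching every machine at once.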

This is something the industry was supposed to learn from the CrowdStrike incident last year, but it's clear that we still have a long way to go.

Also, enabling global anything (i.e., "enabling global kill switches for features") sounds like an incredibly risky idea. One can imagine a bug in a global switch that transforms disabling a feature into disabling an entire system.

nikcub 3 hours ago | parent | next [-]

They require the bot management config to update and propagate quickly in order to respond to attacks - but this seems like a case where updating a single instance first would have surfaced the panic and stopped the deploy.

I wonder why clickhouse is used to store the feature flags here, as it has its own duplication footguns[0] which could also easily have led to a query blowing up 2-3x in size. oltp/sqlite seems more suited, but I'm sure they have their reasons.

[0] https://clickhouse.com/docs/guides/developer/deduplication

HumanOstrich 3 hours ago | parent [-]

I don't think sqlite would come close to their requirements for permissions or resilience, to name a couple. It's not the solution for every database issue.

Also, the link you provided is for eventual deduplication at the storage layer, not deduplication at query time.

hedora 19 minutes ago | parent [-]

I think the idea is to ship the sqlite database around.

It’s not a terrible idea, in that you can test the exact database engine binary in CI, and it’s (by definition) not a single point of failure.

HumanOstrich 5 minutes ago | parent [-]

I think you're oversimplifying the problem they had, and I would encourage you to dive into the details in the article. There wasn't a problem with the database itself; it was with the query used to generate the configs. So if an analogous issue arose with a query against one of many ad-hoc replicated sqlite databases, you'd still have the failure.

I love sqlite for some things, but it's not The One True Database Solution.

mewpmewp2 3 hours ago | parent | prev | next [-]

It seems they had this continuous rollout for the config service, but the services consuming it were affected even by a small percentage of these config providers being faulty, since they were auto-updating their configs every few minutes. And it seems there's a reason for these updating so fast: presumably having to react to threat actors quickly.

otterley 3 hours ago | parent [-]

It's in everyone's interest to mitigate threats as quickly as possible. But it's of even greater interest that a core global network infrastructure service provider not DOS a significant proportion of the Internet by propagating a bad configuration too quickly. The key here is to balance responsiveness against safety, and I'm not sure they struck the right balance here. I'm just glad that the impact wasn't as long and as severe as it could have been.

tptacek 3 hours ago | parent [-]

This isn't really "configuration" so much as it is "durable state" within the context of this system.

otterley 3 hours ago | parent [-]

In my 30 years of reliability engineering, I've come to learn that this is a distinction without a difference.

People think of configuration updates (or state updates, call them what you will) as inherently safer than code updates, but history (and today!) demonstrates that they are not. Yet even experienced engineers will allow changes like these into production unattended -- even ones who wouldn't dare let a single line of code go live without subjecting it to the full CI/CD process.

HumanOstrich 2 hours ago | parent | next [-]

They narrowed down the actual problem to some Rust code in the Bot Management system that enforced a hard limit on the number of configuration items by returning an error, but the caller was just blindly unwrapping it.
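
Roughly that shape of bug, as a hedged sketch (made-up names and limit, not the actual code from the post):

    // The limit is enforced by returning an error...
    const MAX_FEATURES: usize = 200;

    fn load_features(rows: Vec<String>) -> Result<Vec<String>, String> {
        if rows.len() > MAX_FEATURES {
            return Err(format!("too many features: {} > {}", rows.len(), MAX_FEATURES));
        }
        Ok(rows)
    }

    fn main() {
        // A config that grew past the limit (e.g. from duplicated rows upstream).
        let oversized: Vec<String> = (0..300).map(|i| format!("feature_{i}")).collect();
        // ...but the caller blindly unwraps the Result, so the oversized config
        // panics the process instead of being rejected.
        let features = load_features(oversized).unwrap();
        println!("loaded {} features", features.len());
    }

The limit check itself was fine; the unwrap turned "reject this config" into "crash the proxy".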

otterley 2 hours ago | parent [-]

A dormant bug in the code is usually a condition precedent to incidents like these. Later, when a bad input is given, the bug then surfaces. The bug could have lain dormant for years or decades, if it ever surfaced at all.

The point here remains: consider every change to involve risk, and architect defensively.

tptacek 2 hours ago | parent [-]

They made the classic distributed systems mistake and actually did something. Never leap to thing-doing!

otterley 2 hours ago | parent [-]

If they're going to yeet configs into production, they ought to at least have plenty of mitigation mechanisms, including canary deployments and fault isolation boundaries. This was my primary point at the root of this thread.

And I hope fly.io has these mechanisms as well :-)

tptacek 2 hours ago | parent [-]

We've written at long, tedious length about how hard this problem is.

otterley 2 hours ago | parent [-]

Have a link?

tptacek 2 hours ago | parent [-]

Most recently, a few weeks ago (but you'll find more just a page or two into the blog):

https://fly.io/blog/corrosion/

otterley 2 hours ago | parent [-]

It's great that you're working on regionalization. Yes, it is hard, but 100x harder if you don't start with cellular design in mind. And as I said in the root of the thread, this is a sign that Cloudflare needs to invest in it just like you have been.

tptacek 2 hours ago | parent [-]

I recoil from that last statement not because I have a rooting interest in Cloudflare but because the last several years of working at Fly.io have drilled Richard Cook's "How Complex Systems Fail"† deep into my brain, and what you said runs afoul of Cook #18: Failure free operations require experience with failure.

If the exact same thing happens again at Cloudflare, they'll be fair game. But right now I feel people on this thread are doing exactly, precisely, surgically and specifically the thing Richard Cook and the Cook-ites try to get people not to do, which is to see complex system failures as predictable faults with root causes, rather than as part of the process of creating resilient systems.

https://how.complexsystems.fail/

otterley 2 hours ago | parent [-]

Suppose they did have the cellular architecture today, but every other fact was identical. They'd still have suffered the failure! But it would have been contained, and the damage would have been far less.

Fires happen every day. Smoke alarms go off, firefighters get called in, incident response is exercised, and lessons from the situation are learned (with resulting updates to the fire and building codes).

Yet even though this happens, entire cities almost never burn down anymore. And we want to keep it that way.

As Cook points out, "Safety is a characteristic of systems and not of their components."

HumanOstrich an hour ago | parent | next [-]

What variant of cellular architecture are you referring to? Can you give me a link or few? I'm fascinated by it and I've led a team to break up a monolithic solution running on AWS to a cellular architecture. The results were good, but not magic. The process of learning from failures did not stop, but it did change (for the better).

No matter what architecture, processes, software, frameworks, and systems you use, or how exhaustively you plan and test for every failure mode, you cannot 100% predict every scenario and claim "cellular architecture fixes this". This includes making 100% of all failures "contained". Not realistic.

otterley an hour ago | parent [-]

If your AWS service is properly regionalized, that’s the minimum amount of cellular architecture required. Did your service ever fail in multiple regions simultaneously?

Cellular architecture within a region is the next level and is more difficult, but is achievable if you adhere to the same principles that prohibit inter-regional coupling:

https://docs.aws.amazon.com/wellarchitected/latest/reducing-...

https://docs.aws.amazon.com/wellarchitected/latest/reducing-...

HumanOstrich an hour ago | parent [-]

You didn't really put any thought into what I said. Thanks for the links.

otterley 41 minutes ago | parent [-]

It wasn't worth thinking about. I'm not going to defend myself against arguments and absolute claims I didn't make. The key word here is mitigation, not perfection.

hedora 12 minutes ago | parent [-]

> If your AWS service is properly regionalized, that’s the minimum amount of cellular architecture required

Amazon has had multi-region outages due to pushing bad configs, so it's extremely difficult to believe that whatever you're proposing solves that exact problem by relying on multiple regions.

Come to think of it, Cloudflare’s outage today is another good counterexample.

tptacek 2 hours ago | parent | prev [-]

Pretty sure he's making my point (or, rather, me his) there. (I'm never going to turn down an opportunity to nerd out about Cookism).

tptacek 2 hours ago | parent | prev [-]

Reframe this problem: instead of bot rules being propagated, it's the enrollment of a new customer or a service at an existing customer --- something that must happen at Cloudflare several times a second. Does it still make sense to you to think about that in terms of "pushing new configuration to prod"?

otterley 2 hours ago | parent [-]

Those aren't the facts before us. Also, CRUD operations relating to a specific customer or user tend not to cause the sort of widespread incidents we saw today.

tptacek 2 hours ago | parent [-]

They're not; they're a response to your claim that "state" and "configuration" are indifferentiable.

Scaevolus 3 hours ago | parent | prev | next [-]

Global configuration is useful for low response times to attacks, but you need very good ways to know when a global config push is bad and to be able to roll back quickly.

In this case, the older proxy's "fail-closed" categorization of bot activity was obviously better than the "fail-crash", but every global change needs to be carefully validated to have good characteristics here.
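
For illustration, one way a consumer can fail closed on a bad config instead of crashing (a rough sketch with made-up names and limits, not how Cloudflare's proxy is actually structured):

    const MAX_FEATURES: usize = 200;

    struct BotConfig {
        features: Vec<String>,
    }

    fn parse_config(raw: &str) -> Result<BotConfig, String> {
        let features: Vec<String> = raw.lines().map(|l| l.to_string()).collect();
        if features.len() > MAX_FEATURES {
            return Err(format!("too many features: {}", features.len()));
        }
        Ok(BotConfig { features })
    }

    // A bad update is rejected and the last known-good config keeps serving,
    // rather than the process panicking on the spot.
    fn apply_update(current: BotConfig, raw: &str) -> BotConfig {
        match parse_config(raw) {
            Ok(next) => next,
            Err(e) => {
                eprintln!("rejecting bad config, keeping previous version: {e}");
                current
            }
        }
    }

    fn main() {
        let good = apply_update(BotConfig { features: vec![] }, "rule_a\nrule_b");
        let oversized = vec!["rule"; 300].join("\n");
        let still_good = apply_update(good, &oversized); // dropped, not fatal
        println!("serving with {} features", still_good.features.len());
    }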

Having a mapping of which services are downstream of which other service configs and versions would make detecting global incidents much easier too, by making the causative threads of changes more apparent to the investigators.

ants_everywhere 2 hours ago | parent | prev [-]

It's always a config push. People roll out code slowly but don't have the same mechanisms for configs. But configs are code, and this is a blind spot that causes an outsized percentage of these big outages.