eastdakota · 3 hours ago
Because we initially thought it was an attack. And then when we figured it out we didn’t have a way to insert a good file into the queue. And then we needed to reboot processes on (a lot) of machines worldwide to get them to flush their bad files. | ||||||||
prawn · 8 minutes ago
Just asking out of curiosity, but roughly how many staff would've been involved in some way in sorting out the issue? Either outside regular hours or redirected from their planned work? | ||||||||
gucci-on-fleek · 3 hours ago
Thanks for the explanation! This definitely reminds me of the CrowdStrike outage last year:

- A product depends on frequent configuration updates to defend against attackers.
- A bad data file is pushed into production.
- The system is unable to easily or automatically recover from bad data files.

(The CrowdStrike outage was quite a bit worse, though, since it took down entire machines and remediation required manual intervention on thousands of desktops, whereas parts of Cloudflare remained usable throughout the outage and the issue was fully resolved within a few hours.)
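To make the shared failure mode concrete, here's a toy Rust sketch (not Cloudflare's or CrowdStrike's actual code; the 200-feature limit is borrowed from the invariant mentioned elsewhere in this thread, and all names are made up): validate a candidate data file against its invariants before swapping it in, and keep serving the last known-good version when validation fails instead of crashing.

```rust
use std::collections::HashSet;

/// Assumed invariant for illustration; not a confirmed limit.
const MAX_FEATURES: usize = 200;

#[derive(Debug)]
enum ConfigError {
    TooManyFeatures(usize),
    DuplicateFeature(String),
}

/// Reject candidate files that violate basic invariants.
fn validate(features: &[String]) -> Result<(), ConfigError> {
    if features.len() > MAX_FEATURES {
        return Err(ConfigError::TooManyFeatures(features.len()));
    }
    let mut seen = HashSet::new();
    for f in features {
        if !seen.insert(f.as_str()) {
            return Err(ConfigError::DuplicateFeature(f.clone()));
        }
    }
    Ok(())
}

/// Holds the currently active (last known-good) feature set.
struct FeatureStore {
    active: Vec<String>,
}

impl FeatureStore {
    /// Swap in the candidate only if it validates; otherwise keep serving
    /// the previous set and report the error instead of crashing.
    fn try_update(&mut self, candidate: Vec<String>) -> Result<(), ConfigError> {
        validate(&candidate)?;
        self.active = candidate;
        Ok(())
    }
}

fn main() {
    let mut store = FeatureStore { active: vec!["bot_score".into()] };
    // An oversized file is rejected; the old set keeps serving traffic.
    let oversized: Vec<String> = (0..500).map(|i| format!("feature_{i}")).collect();
    match store.try_update(oversized) {
        Ok(()) => println!("promoted new feature file"),
        Err(e) => println!("kept last known-good set ({} features): {e:?}", store.active.len()),
    }
}
```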
tptacek · 3 hours ago
Richard Cook #18 (and #10) strikes again! https://how.complexsystems.fail/#18 It'd be fun to read more about how you all procedurally respond to this (but maybe this is just a fixation of mine lately). Like: are you tabletopping this scenario? Are teams building out runbooks for how to quickly resolve it? What's the balancing test between "this needs a functional change to how our distributed systems work" and "instead of layering on additional complexity, we should just have a process for quickly, and maybe even speculatively, restoring this part of the system to a known good state during an outage"?
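On the "speculatively restore to a known good state" option, here's a minimal sketch of what that might look like mechanically (hypothetical structure and names, not Cloudflare's tooling): keep a bounded history of artifacts that actually served traffic, so rollback during an incident is a single, rehearsable operation rather than an ad-hoc hunt for a good file.

```rust
use std::collections::VecDeque;

/// A validated, versioned artifact (illustrative structure).
#[derive(Clone, Debug)]
struct Artifact {
    version: u64,
    payload: Vec<u8>,
}

/// Bounded history of artifacts that are known to have worked in
/// production, so "restore to known good" is one operation.
struct KnownGoodHistory {
    max_entries: usize,
    history: VecDeque<Artifact>,
}

impl KnownGoodHistory {
    fn new(max_entries: usize) -> Self {
        Self { max_entries, history: VecDeque::new() }
    }

    /// Record an artifact once it has been serving successfully.
    fn record(&mut self, artifact: Artifact) {
        if self.history.len() == self.max_entries {
            self.history.pop_front();
        }
        self.history.push_back(artifact);
    }

    /// Speculative rollback: return the newest artifact older than the
    /// suspect version, to be re-pushed through the normal pipeline.
    fn rollback_before(&self, suspect_version: u64) -> Option<&Artifact> {
        self.history.iter().rev().find(|a| a.version < suspect_version)
    }
}
```

The interesting part is less the data structure than whether the rollback path is exercised regularly enough to be trusted mid-incident.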
hbbio · 1 hour ago
Thanks for the explanation! Side thought, since we're working on 100% onchain systems (for digital asset security, so different goals): public chains (e.g. EVMs) can act as a tamper-evident gate that only promotes a new config artifact if (a) a delay has elapsed or a multi-sig review has completed, and (b) a succinct proof shows the artifact satisfies safety invariants (≤200 features, deduped, schema X, etc.). That could have blocked propagation of the oversized file long before it reached the edge :)
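A minimal sketch of the promotion rule described above, leaving out the on-chain and proof machinery and just encoding the gate itself (all names and thresholds are illustrative): an artifact propagates only if the review delay has elapsed or enough sign-offs exist, and it satisfies the safety invariants (≤200 features, deduplicated).

```rust
use std::time::{Duration, SystemTime};

/// Candidate config artifact awaiting promotion (illustrative fields).
struct Candidate {
    features: Vec<String>,
    submitted_at: SystemTime,
    approvals: usize,
}

/// Gate policy: (delay elapsed OR quorum of sign-offs) AND invariants hold.
struct GatePolicy {
    review_delay: Duration,
    required_approvals: usize,
    max_features: usize,
}

/// Safety invariants from the comment above: bounded size, no duplicates.
fn satisfies_invariants(c: &Candidate, policy: &GatePolicy) -> bool {
    let mut sorted = c.features.clone();
    sorted.sort();
    sorted.dedup();
    sorted.len() == c.features.len() && c.features.len() <= policy.max_features
}

fn may_promote(c: &Candidate, policy: &GatePolicy, now: SystemTime) -> bool {
    let delay_elapsed = now
        .duration_since(c.submitted_at)
        .map(|d| d >= policy.review_delay)
        .unwrap_or(false);
    let reviewed = c.approvals >= policy.required_approvals;
    (delay_elapsed || reviewed) && satisfies_invariants(c, policy)
}
```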
philipwhiuk · 40 minutes ago
Why was Warp in London disabled temporarily? That change isn't mentioned in the RCA, despite being called out in an update. For London customers it temporarily made the impact more severe.
dbetteridge · 2 hours ago
Question from a casual bystander: why not have a virtual/staging mini-node that receives these feature file changes first and catches errors, vetoing the full production push? Or do you already have something like this, but the specific DB permission change in this case only failed in production?
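For what a canary gate like that might look like (purely illustrative names, not Cloudflare's pipeline): ship the candidate file to one staging node, try to actually load it there, and veto the global push on failure. As the question itself notes, this only helps if the canary consumes the same generated file and code path that failed in production.

```rust
/// Result of loading a candidate feature file on a single node
/// (illustrative; a real check would also watch error rates, etc.).
#[derive(Debug)]
enum CanaryOutcome {
    Healthy,
    Failed(String),
}

/// Stand-in for "ship the file to one staging/canary node and try to
/// actually load and serve with it".
fn run_canary(candidate: &[u8]) -> CanaryOutcome {
    match parse_feature_file(candidate) {
        Ok(_) => CanaryOutcome::Healthy,
        Err(e) => CanaryOutcome::Failed(e),
    }
}

/// The production push is vetoed unless the canary succeeded.
fn deploy(candidate: &[u8]) -> Result<(), String> {
    match run_canary(candidate) {
        CanaryOutcome::Healthy => {
            push_to_all_edges(candidate); // hypothetical fan-out step
            Ok(())
        }
        CanaryOutcome::Failed(reason) => Err(format!("vetoed by canary: {reason}")),
    }
}

// Placeholder implementations so the sketch compiles standalone.
fn parse_feature_file(bytes: &[u8]) -> Result<usize, String> {
    if bytes.is_empty() { Err("empty file".into()) } else { Ok(bytes.len()) }
}
fn push_to_all_edges(_bytes: &[u8]) {}

fn main() {
    match deploy(b"") {
        Ok(()) => println!("pushed to production"),
        Err(e) => println!("{e}"),
    }
}
```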
tetec1 · 3 hours ago
Yeah, I can imagine that this insertion was some high-pressure job. | ||||||||