| ▲ | Cloudflare outage on December 5, 2025 (blog.cloudflare.com) |
| 209 points by meetpateltech 2 hours ago | 134 comments |
| |
|
| ▲ | flaminHotSpeedo 2 hours ago | parent | next [-] |
What's the culture like at Cloudflare re: ops/deployment safety? They saw errors related to a deployment, and because it was related to a security issue, instead of rolling it back they decided to make another deployment with global blast radius? Not only did they fail to apply the deployment safety 101 lesson of "when in doubt, roll back", but they also failed to assess the risk related to the same deployment system that caused their 11/18 outage. Pure speculation, but to me that sounds like there's more to the story; this sounds like the sort of cowboy decision a team makes when they've either already broken all the rules or weren't following them in the first place |
| |
| ▲ | dkyc an hour ago | parent | next [-] | | One thing to keep in mind when judging what's 'appropriate' is that Cloudflare was effectively responding to an ongoing security incident outside of their control (the React Server RCE vulnerability). Part of Cloudflare's value proposition is being quick to react to such threats. That changes the equation a bit: any hour you wait longer to deploy, your customers are actively getting hacked through a known high-severity vulnerability. In this case it's not just a matter of 'hold back for another day to make sure it's done right', like when adding a new feature to a normal SaaS application. In Cloudflare's case moving slower also comes with a real cost. That isn't to say it didn't work out badly this time, just that the calculation is a bit different. | | |
| ▲ | flaminHotSpeedo 44 minutes ago | parent | next [-] | | To clarify, I'm not trying to imply that I definitely wouldn't have made the same decision, or that cowboy decisions aren't ever the right call. However, this preliminary report doesn't really justify the decision to use the same deployment system responsible for the 11/18 outage. Deployment safety should have been the focus of this report, not the technical details. The question I want answered isn't "are there bugs in Cloudflare's systems", it's "has Cloudflare learned from its recent mistakes to respond appropriately to events" | |
| ▲ | Already__Taken an hour ago | parent | prev | next [-] | | The CVE isn't a zero day though, so how come Cloudflare weren't at the table for early disclosure? | | | |
| ▲ | udev4096 34 minutes ago | parent | prev [-] | | Clownflare did what it does best, mess up and break everything. It will keep happening again and again | | |
| ▲ | toomuchtodo 22 minutes ago | parent [-] | | Indeed, but it is what it is. Cloudflare comes out of my budget, and even with downtime, it's better than not paying them. Do I want to deal with what Cloudflare offers? I do not; I have higher-value work to focus on. I want to pay someone else to deal with this, and just like when cloud providers are down, it'll be back up eventually. Grab a coffee or beer and hang; we aren't saving lives, we're just building websites. This is not laziness or nihilism, but simply being rational and pragmatic. |
|
| |
| ▲ | liampulles an hour ago | parent | prev | next [-] | | Rollback is a reliable strategy when the rollback process is well understood. If a rollback process is not well known and well practiced, then it is a risk in itself. I'm not sure of the nature of the rollback process in this case, but leaning on ill-founded assumptions is a bad practice. I do agree that a global rollout is a problem. | |
| ▲ | lukeasrodgers an hour ago | parent | prev | next [-] | | Roll back is not always the right answer. I can’t speak to its appropriateness in this particular situation of course, but sometimes “roll forward” is the better solution. | | |
| ▲ | flaminHotSpeedo 32 minutes ago | parent | next [-] | | Like the other poster said, roll back should be the right answer the vast majority of the time. But it's also important to recognize that roll forward should be a replacement for the deployment you decided not to roll back, not a parallel deployment through another system. I won't say never, but a situation where the right answer to avoid a rollback (that it sounds like was technically fine to do, just undesirable from a security/business perspective) is a parallel deployment through a radioactive, global blast radius, near instantaneous deployment system that is under intense scrutiny after another recent outage should be about as probable as a bowl of petunias in orbit | |
| ▲ | echelon an hour ago | parent | prev [-] | | You want to build a world where roll back is 95% the right thing to do. So that it almost always works and you don't even have to think about it. During an incident, the incident lead should be able to say to your team's on call: "can you roll back? If so, roll back" and the oncall engineer should know if it's okay. By default it should be if you're writing code mindfully. Certain well-understood migrations are the only cases where roll back might not be acceptable. Always keep your services in "roll back able", "graceful fail", "fail open" state. This requires tremendous engineering consciousness across the entire org. Every team must be a diligent custodian of this. And even then, it will sometimes break down. Never make code changes you can't roll back from without reason and without informing the team. Service calls, data write formats, etc. I've been in the line of billion dollar transaction value services for most of my career. And unfortunately I've been in billion dollar outages. |
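A minimal sketch of the "fail open" point above, with an invented optional enrichment call (none of these names are Cloudflare's):

    // Hypothetical optional enrichment call; in a real system this would be a
    // network call to a non-critical sidecar service.
    fn bot_score(req_id: u64) -> Result<u8, String> {
        Err(format!("scoring backend unavailable for request {req_id}"))
    }

    // Fail open: if the optional dependency errors, degrade gracefully and keep
    // serving the request instead of returning a 5xx.
    fn handle_request(req_id: u64) -> String {
        let score = match bot_score(req_id) {
            Ok(s) => Some(s),
            Err(e) => {
                eprintln!("non-critical scoring failed, failing open: {e}");
                None
            }
        };
        format!("200 OK (bot score: {score:?})")
    }

    fn main() {
        println!("{}", handle_request(42));
    }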
| |
| ▲ | this_user an hour ago | parent | prev | next [-] | | The question is perhaps what the shape and status of their tech stack is. Obviously, they are running at massive scale, and they have grown extremely aggressively over the years. What's more, especially over the last few years, they have been adding new product after new product. How much tech debt have they accumulated with that "move fast" approach that is now starting to rear its head? | | |
| ▲ | sandeepkd 9 minutes ago | parent [-] | | I think this is probably a bigger root cause and is going to show up in different ways in the future. The mere act of adding new products to an existing architecture/system is bound to create knowledge silos around operations and tech debt. There is a good reason why big companies keep smart people on their payroll to just change a couple of lines after a week of debate. |
| |
| ▲ | otterley an hour ago | parent | prev | next [-] | | From the post: “We have spoken directly with hundreds of customers following that incident and shared our plans to make changes to prevent single updates from causing widespread impact like this. We believe these changes would have helped prevent the impact of today’s incident but, unfortunately, we have not finished deploying them yet. “We know it is disappointing that this work has not been completed yet. It remains our first priority across the organization.” | |
| ▲ | nine_k an hour ago | parent | prev | next [-] | | > more to the story From a more tinfoil-wearing angle, it may not even be a regular deployment, given the idea of Cloudflare being "the largest MitM attack in history". ("Maybe not even by Cloudflare but by NSA", would say some conspiracy theorists, which is, of course, completely bonkers: NSA is supposed to employ engineers who never let such blunders blow their cover.) | |
| ▲ | deadbabe 2 hours ago | parent | prev | next [-] | | As usual, Cloudflare is the man in the arena. | | |
| ▲ | samrus an hour ago | parent [-] | | There are other men in the arena who aren't tripping on their own feet | |
| ▲ | usrnm an hour ago | parent [-] | | Like who? Which large tech company doesn't have outages? | | |
| ▲ | nish__ 30 minutes ago | parent | next [-] | | Google does pretty good. | |
| ▲ | k8sToGo an hour ago | parent | prev | next [-] | | It's not about outages. It's about the why. Hardware can fail. Bugs can happen. But to continue a rollout despite warning signs and without understanding the cause and impact is on another level. Especially if it is related to the same problem as last time. | |
| ▲ | udev4096 32 minutes ago | parent [-] | | And yet, it's always clownflare breaking everything. Failures are inevitable, which is widely known; that's why we build resilient systems to overcome the inevitable | |
| ▲ | deadbabe 5 minutes ago | parent [-] | | It is healthy for tech companies to have outages, as they will build experience in resolving them. Success breeds complacency. |
|
| |
| ▲ | k__ an hour ago | parent | prev [-] | | "tripping on their own feet" == "not rolling back" |
|
|
| |
| ▲ | rvz an hour ago | parent | prev | next [-] | | > Not only did they fail to apply the deployment safety 101 lesson of "when in doubt, roll back" but they also failed to assess the risk related to the same deployment system that caused their 11/18 outage. Also, there seems to have been insufficient testing before deployment, with very junior-level mistakes. > As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module which led to the following LUA exception: Where was the testing for this one? If ANY exception happened during the rules checking, the deployment should fail and roll back. Instead, they didn't assess that as a likely risk and pressed on with the deployment "fix". I guess those at Cloudflare are not learning anything from the previous disaster. | |
| ▲ | NoSalt an hour ago | parent | prev [-] | | Ooh ... I want to be on a cowboy decision making team!!! |
|
|
| ▲ | paradite 2 hours ago | parent | prev | next [-] |
The deployment pattern from Cloudflare looks insane to me. I've worked at one of the top fintech firms; whenever we do a config change or deployment, we are supposed to have a rollback plan ready and monitor key dashboards for 15-30 minutes. The dashboards need to be prepared beforehand, covering the systems and key business metrics that would be affected by the deployment, and reviewed by teammates. I've never seen a downtime longer than 1 minute while I was there, because you get a spike on the dashboard immediately when something goes wrong. For the entire system to be down for 10+ minutes due to a bad config change or deployment is just beyond me. |
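A rough sketch of that deploy-then-watch-then-roll-back loop, with made-up thresholds and a stand-in for the metrics query (nothing here is any particular vendor's tooling):

    use std::{thread, time::Duration};

    // Stand-in for querying the pre-built dashboards / metrics system.
    fn error_rate() -> f64 { 0.001 }

    fn deploy() { println!("deploying change"); }
    fn rollback() { println!("spike detected, executing pre-agreed rollback plan"); }

    fn main() {
        let baseline = error_rate();
        deploy();
        // Watch the key dashboards for ~30 minutes after the change goes out.
        for minute in 0..30 {
            thread::sleep(Duration::from_secs(60));
            if error_rate() > baseline * 5.0 {
                println!("error rate spiked {minute} minutes after deploy");
                rollback();
                return;
            }
        }
        println!("change looks healthy, keeping it");
    }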
| |
| ▲ | markus_zhang an hour ago | parent | next [-] | | My guess is that CF has so many external customers that they need to move fast and try not to break things. My hunch is that their culture always favors moving fast. As long as they are not breaking too many things, customers won't leave them. | | |
| ▲ | paradite an hour ago | parent [-] | | There is nothing wrong with moving fast and deploying fast. I'm more talking about how slow it was to detect the issue caused by the config change and roll it back. It took 20 minutes. |
| |
| ▲ | theideaofcoffee an hour ago | parent | prev [-] | | Same, my time at an F100 ecommerce retailer showed me the same. Every change control board justification needed an explicit back-out/restoration plan with exact steps to be taken, what was being monitored to ensure the plan was being held to, contacts for the groups anticipated to be affected, and emergency numbers/rooms for quick conferences if something did in fact happen. The process was pretty tight; almost no revenue-affecting outages from what I can remember, because it was such a collaborative effort (even though the board presentation seemed a bit spiky and confrontational at the time, everyone was working together). | |
| ▲ | prdonahue an hour ago | parent [-] | | And you moved at a glacial pace compared to Cloudflare. There are tradeoffs. |
|
|
|
| ▲ | Scaevolus 2 hours ago | parent | prev | next [-] |
| > Disabling this was done using our global configuration system. This system does not use gradual rollouts but rather propagates changes within seconds to the entire network and is under review following the outage we recently experienced on November 18. > As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module which led to the following LUA exception: They really need to figure out a way to correlate global configuration changes to the errors they trigger as fast as possible. > as part of this rollout, we identified an increase in errors in one of our internal tools which we use to test and improve new WAF rules Warning signs like this are how you know that something might be wrong! |
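One way to read that suggestion: stamp error telemetry with the config generation that was live when the error occurred, then compare error rates across generations and halt propagation on a jump. A toy sketch with invented numbers:

    use std::collections::HashMap;

    fn main() {
        // (config_generation, errors, requests), as they might arrive from telemetry.
        let samples = [(41_u64, 12_u64, 100_000_u64), (42, 9_300, 100_000)];

        let rates: HashMap<u64, f64> = samples
            .iter()
            .map(|&(generation, errors, requests)| (generation, errors as f64 / requests as f64))
            .collect();

        // A new generation whose error rate dwarfs its predecessor's is a strong
        // hint to stop the rollout and roll the change back.
        if rates[&42] > rates[&41] * 10.0 {
            println!("config generation 42 correlates with an error spike");
        }
    }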
| |
| ▲ | philipwhiuk an hour ago | parent [-] | | > Warning signs like this are how you know that something might be wrong! Yes, as they explain it's the rollback that was triggered due to seeing these errors that broke stuff. | | |
| ▲ | Scaevolus 38 minutes ago | parent | next [-] | | They saw errors and decided to do a second rollout to disable the component generating errors, causing a major outage. | |
| ▲ | 8cvor6j844qw_d6 37 minutes ago | parent | prev [-] | | Would be nice if the outage dashboards were directly linked to this instead of whatever they have now. |
|
|
|
| ▲ | jakub_g 28 minutes ago | parent | prev | next [-] |
| The interesting part: After rolling out a bad ruleset update, they tried a killswitch (rolled out immediately to 100%) which was a code path never executed before: > However, we have never before applied a killswitch to a rule with an action of “execute”. When the killswitch was applied, the code correctly skipped the evaluation of the execute action, and didn’t evaluate the sub-ruleset pointed to by it. However, an error was then encountered while processing the overall results of evaluating the ruleset > a straightforward error in the code, which had existed undetected for many years |
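A minimal Rust rendering of the bug class described in those quotes (all names here are invented, and the code that actually failed is Lua): evaluation honours the killswitch and skips the rule, but the result-processing step still assumes every rule produced an outcome.

    use std::collections::HashMap;

    struct Rule {
        id: u32,
        killswitched: bool,
    }

    struct Outcome {
        matched: bool,
    }

    // Evaluation honours the killswitch: skipped rules produce no outcome at all.
    fn evaluate(rules: &[Rule]) -> HashMap<u32, Outcome> {
        let mut results = HashMap::new();
        for rule in rules {
            if rule.killswitched {
                continue;
            }
            results.insert(rule.id, Outcome { matched: false });
        }
        results
    }

    // Post-processing written years earlier, assuming an outcome exists for every rule.
    fn summarize(rules: &[Rule], results: &HashMap<u32, Outcome>) -> usize {
        rules
            .iter()
            // Indexing a HashMap panics on a missing key, the moral equivalent
            // of indexing a nil value in Lua.
            .filter(|rule| results[&rule.id].matched)
            .count()
    }

    fn main() {
        let rules = vec![
            Rule { id: 1, killswitched: false },
            Rule { id: 2, killswitched: true }, // the "execute" rule that got killswitched
        ];
        let results = evaluate(&rules);
        println!("{}", summarize(&rules, &results)); // panics here
    }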
| |
| ▲ | 8cvor6j844qw_d6 25 minutes ago | parent [-] | | > have never before applied a killswitch to a rule with an action of “execute” One might think a company on the scale of Cloudflare would have a suite of comprehensive tests to cover various scenarios. | | |
| ▲ | hnthrowaway0328 16 minutes ago | parent [-] | | I kinda think most companies out there are like that. Moving fast is the motto I heard the most. They are probably OK with occasional breaks as long as customers don't mind. |
|
|
|
| ▲ | lionkor 43 minutes ago | parent | prev | next [-] |
| Cloudflare is now below 99.9% uptime, for anyone keeping track. I reckon my home PC is at least 99.9%. |
|
| ▲ | miyuru 2 hours ago | parent | prev | next [-] |
What's going on with Cloudflare's software team? I have seen similar bugs in the Cloudflare API recently as well. There is an endpoint for a feature that is available only to enterprise users, but the check for whether the user is on an enterprise plan is done at the last step. |
|
| ▲ | cpncrunch 13 minutes ago | parent | prev | next [-] |
| I've noticed that in recent months, even apart from these outages, cloudflare has been contributing to a general degradation and shittification of the internet. I'm seeing a lot more "prove you're human", "checking to make sure you're human", and there is normally at the very least a delay of a few seconds before the site loads. I don't think this is really helping the site owners. I suspect it's mainly about AI extortion: https://blog.cloudflare.com/introducing-pay-per-crawl/ |
| |
| ▲ | james2doyle 2 minutes ago | parent | next [-] | | You call it extortion of the AI companies, but isn’t stealing/crawling/hammering a site to scrape their content to resell just as nefarious? I would say Cloudflare is giving these site owners an option to protect their content and as a byproduct, reduce their own costs of subsidizing their thieves. They can choose to turn off the crawl protection. If they aren't, that tells you that they want it, doesn’t it? | |
| ▲ | NooneAtAll3 5 minutes ago | parent | prev [-] | | it can't even spy on us silently, damn |
|
|
| ▲ | liampulles an hour ago | parent | prev | next [-] |
| The lesson presented by the last few big outages is that entropy is, in fact, inescapable. The comprehensibility of a system cannot keep up with its growing and aging complexity forever. The rate of unknown unknowns will increase. The good news is that a more decentralized internet with human brain scoped components is better for innovation, progress, and freedom anyway. |
| |
| ▲ | hnthrowaway0328 11 minutes ago | parent [-] | | I'm not sure how decentralization helps though. People in a bazaar are going to care even less about sharing shadow knowledge. Linux IMO succeeds not because of the bazaar but because of Linus. |
|
|
| ▲ | rachr an hour ago | parent | prev | next [-] |
| Time for Cloudflare to start using the BOFH excuse generator.
https://bofh.d00t.org/ |
|
| ▲ | uyzstvqs 24 minutes ago | parent | prev | next [-] |
What I'm missing here is a test environment. Gradual or not, why are they deploying straight to prod? At Cloudflare's scale, there should be a dedicated room in Cloudflare HQ with a full isolated model-scale deployment of their entire system. All changes should go there first, with tests run for every possible scenario. Only after that do you use gradual deployment, with a big red oopsie button which immediately rolls the changes back. Languages with strong type systems won't save you; good procedure will. |
|
| ▲ | 8cvor6j844qw_d6 40 minutes ago | parent | prev | next [-] |
Are there some underlying factors that resulted in the recent outages (e.g., new processes, layoffs, etc.), or is this just a series of pure coincidences? |
| |
| ▲ | Elucalidavah 26 minutes ago | parent | next [-] | | Sounds like their "FL1 -> FL2" transition is involved in both. | | |
| ▲ | Someone1234 4 minutes ago | parent [-] | | It was involved in the previous one, but not in this latest one. All FL2 did was prevent the outage from being even more widespread than it was. None of this had anything to do with the migration. |
| |
| ▲ | gernigg 18 minutes ago | parent | prev [-] | | It's all good saaaar don't think about it |
|
|
| ▲ | Bender 23 minutes ago | parent | prev | next [-] |
| Suggestion for Cloudflare: Create an early adopter option for free accounts. Benefit: Earliest uptake of new features and security patches. Drawback: Higher risk of outages. I think this should be possible since they already differentiate between free, pro and enterprise accounts. I do not know how the routing for that works but I bet they could do this. Think crowd-sourced beta testers. Also a perk for anything PCI audit or FEDRAMP naughty-word related. |
|
| ▲ | xnorswap 2 hours ago | parent | prev | next [-] |
| My understanding, paraphrased: "In order to gradually roll out one change, we had to globally push a different configuration change, which broke everything at once". But a more important takeaway: > This type of code error is prevented by languages with strong type systems |
| |
| ▲ | jsnell 2 hours ago | parent | next [-] | | That's a bizarre takeaway for them to suggest, when they had exactly the same kind of bug with Rust like three weeks ago. (In both cases they had code implicitly expecting results to be available. When the results weren't available, they terminated processing of the request with an exception-like mechanism. And then they had the upstream services fail closed, despite the failing requests being to optional sidecars rather than on the critical query path.) | | |
| ▲ | littlestymaar an hour ago | parent [-] | | In fairness, the previous bug (with the Rust unwrap) should never have happened: someone explicitly called the panicking function, the review didn't catch it and the CI didn't catch it. It required a significant organizational failure to happen. These happen but they ought to be rarer than your average bug (unless your organization is fundamentally malfunctioning, that is) | | |
| ▲ | greatgib an hour ago | parent [-] | | The issue would also not have happened if someone had written the right code and tests, and the review or CI had caught it... |
|
| |
| ▲ | debugnik 2 hours ago | parent | prev | next [-] | | Prevented unless they assert the wrong invariant at runtime like they did last time. | |
| ▲ | skywhopper 2 hours ago | parent | prev [-] | | This is the exact same type of error that happened in their Rust code last time. Strong type systems don’t protect you from lazy programming. |
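To make that concrete, a tiny sketch (hypothetical names): the type system forces the missing-value case to be spelled out, but an unwrap(), as in the 11/18 incident, opts right back into the crash.

    use std::collections::HashMap;

    fn main() {
        let features: HashMap<&str, u32> = HashMap::new();

        // The compiler forces the "missing" case to be handled explicitly...
        match features.get("bot_score") {
            Some(score) => println!("score: {score}"),
            None => println!("feature unavailable, failing open"),
        }

        // ...but nothing stops lazy code from opting back into a runtime crash.
        let score = features.get("bot_score").unwrap(); // panics: the value is absent
        println!("{score}");
    }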
|
|
| ▲ | dreamcompiler an hour ago | parent | prev | next [-] |
| "Honey we can't go on that vacation after all. In fact we can't ever take a vacation period." "Why?" "I've just been transferred to the Cloudflare outage explanation department." |
|
| ▲ | gkoz 2 hours ago | parent | prev | next [-] |
I sometimes feel we'd be better off without all the paternalistic kitchen-sink features. The solid, properly engineered features used intentionally aren't causing these outages. |
| |
| ▲ | ilkkao an hour ago | parent [-] | | Agreed, I don't really like Cloudflare trying to magically fix every web exploit there is in frameworks my site has never used. | | |
|
|
| ▲ | egorfine an hour ago | parent | prev | next [-] |
> provides customers with protection against malicious payloads, allowing them to be detected and blocked. To do this, Cloudflare’s proxy buffers HTTP request body content in memory for analysis. I have mixed feelings about this. On the one hand, I absolutely don't want a CDN to look inside my payloads and decide what's good for me or not. Today it's protection, tomorrow it's censorship. On the other hand, this is exactly what CloudFlare is good for - to protect sites from malicious requests. |
| |
| ▲ | udev4096 28 minutes ago | parent [-] | | We need a decentralized ddos mitigation network based on incentives. Donate X amount of bandwidth, get Y amount of protection from other peers. Yes, we gotta do TLS inspection on every end for effective L7 mitigation but at least filtering can be done without decrypting any packets |
|
|
| ▲ | nish__ 27 minutes ago | parent | prev | next [-] |
| Is it crazy to anyone else that they deploy every 5 minutes? And that it's not just config updates, but actual code changes with this "execute" action. |
|
| ▲ | mmmlinux 13 minutes ago | parent | prev | next [-] |
| Messing around on a Friday? Brave. |
|
| ▲ | hrimfaxi 2 hours ago | parent | prev | next [-] |
| Having their changes fully propagate within 1 minute is pretty fantastic. |
| |
| ▲ | denysvitali an hour ago | parent | next [-] | | This is most likely a hard requirement for such a big-scale deployment of DDoS protection and detection - which explains their architectural choices (ClickHouse & co) and the need for super-low-latency config changes. Since attackers might rotate IPs more frequently than once per minute, this effectively means the whole fleet of servers has to be able to react quickly to decisions made centrally. | |
| ▲ | chatmasta an hour ago | parent | prev [-] | | The coolest part of Cloudflare’s architecture is that every server is the same… which presumably makes deployment a straightforward task. |
|
|
| ▲ | rany_ an hour ago | parent | prev | next [-] |
| > As part of our ongoing work to protect customers using React against a critical vulnerability, CVE-2025-55182, we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications. Why would increasing the buffer size help with that security vulnerability? Is it just a performance optimization? |
| |
| ▲ | redslazer an hour ago | parent | next [-] | | If the request data is larger than the limit it doesn’t get processed by the Cloudflare system. By increasing buffer size they process (and therefore protect) more requests. | |
| ▲ | boxed an hour ago | parent | prev [-] | | I think the buffer size is the limit on what they check for malicious data, so with the old 128k limit it would be trivial to circumvent by just sending 128k of OK data and then putting the exploit after. |
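If that reading is right, i.e. only the buffered prefix of the body is ever scanned, the bypass is easy to picture. A sketch with an invented signature string, not any real WAF rule:

    const BUFFER_LIMIT: usize = 128 * 1024; // the old limit; the change raised it to 1 MB
    const SIGNATURE: &[u8] = b"malicious-marker"; // stand-in, not a real detection rule

    // Hypothetical WAF-style check that only ever sees the buffered prefix.
    fn body_looks_malicious(body: &[u8]) -> bool {
        let inspected = &body[..body.len().min(BUFFER_LIMIT)];
        inspected
            .windows(SIGNATURE.len())
            .any(|window| window == SIGNATURE)
    }

    fn main() {
        // Pad the first 128 KiB with harmless bytes so the payload lands past
        // the inspected window and is never scanned.
        let mut body = vec![b'a'; BUFFER_LIMIT];
        body.extend_from_slice(SIGNATURE);
        assert!(!body_looks_malicious(&body));
        println!("payload slipped past the buffered inspection window");
    }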
|
|
| ▲ | theoldgreybeard 5 minutes ago | parent | prev | next [-] |
| This is total amateur shit. Completely unacceptable for something as critical as Cloudflare. |
|
| ▲ | _pdp_ an hour ago | parent | prev | next [-] |
| So no static compiler checks and apparently no fuzzers used to ensure these rules work as intended? |
| |
|
| ▲ | denysvitali 2 hours ago | parent | prev | next [-] |
| Ironically, this time around the issue was in the proxy they're going to phase out (and replace with the Rust one). I truly believe they're really going to make resilience their #1 priority now, and acknowledging the release process errors that they didn't acknowledge for a while (according to other HN comments) is the first step towards this. HugOps.
Although bad for reputation, I think these incidents will help them shape (and prioritize!) resilience efforts more than ever. At the same time, I can't think of a company more transparent than CloudFlare when it comes to these kinds of things. I also understand the urgency behind this change: CloudFlare acted (too) fast to mitigate the React vulnerability and this is the result. Say what you want, but I'd prefer to trust a CloudFlare that admits and acts upon its fuckups, rather than one that covers them up or downplays them like some other major cloud providers. @eastdakota: ignore the negative comments here; transparency is a very good strategy and this article shows a good plan to avoid further problems |
| |
| ▲ | iLoveOncall 38 minutes ago | parent | next [-] | | > I truly believe they're really going to make resilience their #1 priority now I hope that was their #1 priority from the very start given the services they sell... Anyway, people always tend to overthink those black-swan events. Yes, 2 happened in quick succession, but what is the average frequency overall? Insignificant. | |
| ▲ | denysvitali 18 minutes ago | parent [-] | | I think they have to strike a balance between being extremely fast (reacting to vulnerabilities and DDOS attacks) while still being resilient. I don't think it's an easy situation |
| |
| ▲ | fidotron an hour ago | parent | prev | next [-] | | > HugOps This childish nonsense needs to end. Ops are heavily rewarded because they're supposed to be responsible. If they're not then the associated rewards for it need to stop as well. | | |
| ▲ | denysvitali an hour ago | parent | next [-] | | I have never seen an Ops team being rewarded for avoiding incidents (focusing on tech debt reduction), but instead they get the opposite - blamed when things go wrong. I think it's human nature (it's hard to realize something is going well until it breaks), but it still has a very negative psychological effect. I can barely imagine the stress the team is going through right now. | |
| ▲ | fidotron an hour ago | parent [-] | | > I have never seen an Ops team being rewarded for avoiding incidents That's why their salaries are so high. | | |
| ▲ | denysvitali an hour ago | parent | next [-] | | Depending on the tech debt, the ops team might just be in "survival mode" and not have the time to fix every single issue. In this particular case, they seem to be doing two things:
- Phasing out the old proxy (Lua-based), which is being replaced by FL2 (Rust-based, the one that caused the previous incident)
- Reacting to an actively exploited vulnerability in React by deploying WAF rules - and they're doing it in a relatively careful way (test rules) to avoid fuckups, which is what produced the unknown state that triggered the issue | |
| ▲ | fidotron an hour ago | parent [-] | | They deliberately ignored an internal tool that started erroring out at the given deployment and rolled it out anyway without further investigation. That's not deserving of sympathy. |
| |
| ▲ | esseph 34 minutes ago | parent | prev [-] | | Ops salaries are high??? Where?!?! | | |
|
| |
| ▲ | esseph 35 minutes ago | parent | prev [-] | | Ops has never been "rewarded" at any org I've ever been at or heard about, including physical infra companies. |
| |
| ▲ | trashburger an hour ago | parent | prev | next [-] | | I would very much like for him not to ignore the negativity, given that, you know, they are breaking the entire fucking Internet every time something like this happens. | | |
| ▲ | denysvitali an hour ago | parent [-] | | This is the kind of comment I wish he would ignore. You can be angry - but that doesn't help anyone.
They fucked up, yes, but they admitted it and provided plans on how to address it. I don't think they do these things on purpose. Of course, given their good market penetration they end up disrupting a lot of customers - and they should focus on slow rollouts - but I also believe that in a DDoS protection system (or WAF) you don't want, and don't have the luxury, to wait for days until your rule is applied. | |
| ▲ | nish__ 22 minutes ago | parent | next [-] | | Maybe not on purpose but there's such a thing as negligence. | |
| ▲ | beanjuiceII an hour ago | parent | prev [-] | | I hope he doesn't ignore it; the internet has been forgiving enough toward Cloudflare's string of failures. It's getting pretty old, and it creates a ton of chaos. I work with life-saving devices; being impacted in any way in data monitoring has serious consequences. "Sorry ma'am, we can't give your child T1D readings on your follow app because our provider decided to break everything in the pursuit of some React bug" has a great ring to it | |
| ▲ | esseph 28 minutes ago | parent [-] | | Half your medical devices are probably opening up data leakage to China. https://www.csoonline.com/article/3814810/backdoor-in-chines... Most hospital and healthcare IT teams are extremely underfunded, undertrained, and overworked, and the software, configurations and platforms are normally not the most resilient things. I have a friend at one in the North East going through a hell of a security breach for multiple months now, and I'm flabbergasted no one is dead yet. When it comes to tech, I get the impression most organizations are not very "healthy" in the durability of their systems. |
|
|
| |
| ▲ | da_grift_shift an hour ago | parent | prev [-] | | [ Removed by Reddit ] | | |
| ▲ | denysvitali an hour ago | parent [-] | | Wow. The three comments below parent really show how toxic HN has become. | | |
| ▲ | beanjuiceII an hour ago | parent [-] | | Being angry about something doesn't make it toxic; people have a right to be upset | |
| ▲ | denysvitali an hour ago | parent [-] | | The comment, before the edit, was what I would consider toxic. No wonder it has been edited. It's fine to be upset, and especially rightfully so after the second outage in less than 30 days, but this doesn't justify toxicity. |
|
|
|
|
|
| ▲ | snafeau 2 hours ago | parent | prev | next [-] |
A lot of these kinds of bugs feel like they could be caught by a simple review bot like Greptile... I wonder if Cloudflare uses an equivalent tool internally? |
| |
| ▲ | nkmnz an hour ago | parent | next [-] | | What makes greptile a better choice compared to claude code or codex, in your opinion? | |
| ▲ | nish__ 21 minutes ago | parent | prev [-] | | Any bot that runs an AI model should not be called "simple". |
|
|
| ▲ | antiloper 2 hours ago | parent | prev | next [-] |
Make faster websites: > we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications. Why is the Next.js limit 1 MB? It's not enough for uploading user-generated content (photographs, scanned invoices), but a 1 MB request body for even multiple JSON API calls is ridiculous. These frameworks need to at least provide some pushback to unoptimized development, even if it's just a lower default request body limit. Otherwise all web applications will become as slow as the MS Office suite or Reddit. |
| |
| ▲ | ramon156 an hour ago | parent | next [-] | | The update was to raise it to 3MB (paid: 10MB) |
| ▲ | AmazingTurtle an hour ago | parent | prev [-] | | a) They serialize tons of data into requests
b) Headers. Mostly cookies. They are a thing. They are being abused all over the world by newbies. |
|
|
| ▲ | blibble 28 minutes ago | parent | prev | next [-] |
| amateur level stuff again |
|
| ▲ | nish__ 31 minutes ago | parent | prev | next [-] |
| No love lost, no love found. |
|
| ▲ | fidotron 2 hours ago | parent | prev | next [-] |
| > This change was being rolled out using our gradual deployment system, and, as part of this rollout, we identified an increase in errors in one of our internal tools which we use to test and improve new WAF rules. As this was an internal tool, and the fix being rolled out was a security improvement, we decided to disable the tool for the time being as it was not required to serve or protect customer traffic. Come on. This PM raises more questions than it answers, such as why exactly China would have been immune. |
| |
|
| ▲ | lapcat 2 hours ago | parent | prev | next [-] |
> This is a straightforward error in the code, which had existed undetected for many years. This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur. Cloudflare deployed code that was literally never tested, not even once, neither manually nor by unit test; otherwise the straightforward error would have been detected immediately. And their implied solution seems to be not to test their code when it is written, nor even to add 100% code coverage after the fact, but rather to rely on a programming language to bail them out and cover up their failure to test. |
| |
| ▲ | JohnMakin 18 minutes ago | parent [-] | | Large scale infrastructure changes are often by nature completely untestable. The system is too large, there are too many moving parts to replicate with any kind of sane testing, so often, you do find out in prod, which is why robust and fast rollback procedures are usually desirable and implemented. | | |
| ▲ | lapcat 3 minutes ago | parent [-] | | > Large scale infrastructure changes are often by nature completely untestable. You're changing the subject here and shifting focus from the specific to the vague. The two postmortems after the recent major Cloudflare outages both listed straightforward errors in source code that could have been tested and detected. Theoretical outages could theoretically have other causes, but these two specific outages had specific causes that we know. > which is why robust and fast rollback procedures are usually desirable and implemented. Yes, nobody is arguing against that. It's a red herring with regard to my point about source code testing. |
|
|
|
| ▲ | iLoveOncall an hour ago | parent | prev | next [-] |
The most surprising thing from this article is that CloudFlare handles only around 85M TPS. |
| |
| ▲ | blibble 29 minutes ago | parent [-] | | it can't really be that small, can it? that's maybe half a rack of load | | |
| ▲ | nish__ 19 minutes ago | parent [-] | | Given the number of lua scripts they seem to be running, it has to take more than half a rack. |
|
|
|
| ▲ | kachapopopow 2 hours ago | parent | prev | next [-] |
| why does this seem oddly familiar (fail-closed logic) |
|
| ▲ | rvz an hour ago | parent | prev | next [-] |
> Instead, it was triggered by changes being made to our body parsing logic while attempting to detect and mitigate an industry-wide vulnerability disclosed this week in React Server Components. Doesn't Cloudflare rigorously test their changes before deployment to make sure that this does not happen again? This better not have been used to cover for the fact that they are using AI to fix issues like this one. There had better not be any vibe coders or AI agents touching such critical pieces of infrastructure at all, and I expected Cloudflare to learn from the previous outage very quickly. But this is quite a pattern; we might need to start ranking their unreliability next to GitHub's (which goes down every week). |
|
| ▲ | jgalt212 an hour ago | parent | prev | next [-] |
I do kind of like how they are blaming React for this. |
|
| ▲ | kosolam 16 minutes ago | parent | prev | next [-] |
Some nonsense again. The level of negligence there is astounding. This is frightening because this entity is exposed daily to a large portion of our personal data going over the wire, as well as business data. It's just a matter of time before a disaster occurs. Some regulatory body must take this in hand right now. |
|
| ▲ | da_grift_shift 2 hours ago | parent | prev | next [-] |
| It's not an outage, it's an Availability Incident™. https://blog.cloudflare.com/5-december-2025-outage/#what-abo... |
| |
| ▲ | perching_aix 21 minutes ago | parent [-] | | You jest, but recently I also felt compelled to stop using the word (planned) outage where I work, because it legitimately creates confusion around the (expected) character of the impact. An outage is the nuclear-wasteland situation, which, given modern architectural choices, is rather challenging to manifest. Avoiding the word is face-saving, but also more correct. |
|
|
| ▲ | websiteapi 2 hours ago | parent | prev | next [-] |
I wonder why they cannot do a partial rollout. Like the other outage, they had to do a global rollout. |
| |
| ▲ | usrnm 2 hours ago | parent | next [-] | | I really don't see how it would've helped. In Go or Rust you'd just get a panic, which is in no way different. | |
| ▲ | denysvitali an hour ago | parent | prev [-] | | The article mentions that this Lua-based proxy is the old-generation one, which is going to be replaced by the Rust-based one (FL2), and that one didn't fail in this scenario. So, if anything, their efforts towards a typed language were justified. They just didn't manage to migrate everything in time before this incident - which is ironically a good thing, since this incident was caused mostly by a rushed change in response to an actively exploited vulnerability. | |
| ▲ | websiteapi an hour ago | parent [-] | | Yes, but as the article states, why are they doing fast global rollouts? | |
| ▲ | denysvitali an hour ago | parent [-] | | I think (would love to be corrected) that this is the nature of their service. They probably push multiple config changes per minute to mitigate DDoS attacks. For sure the proxies have a local list of IPs that, for a period of time, are blacklisted. For DDoS protection you can't really rely on multi-hour rollouts. |
|
|
|
|
| ▲ | jpeter 2 hours ago | parent | prev | next [-] |
| Unwrap() strikes again |
| |
| ▲ | dap 2 hours ago | parent | next [-] | | I guess you’re being facetious but for those who didn’t click through: > This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur. | | |
| ▲ | skywhopper an hour ago | parent [-] | | That bit may be true, but the underlying error of a null reference that caused a panic was exactly the same in both incidents. |
| |
| ▲ | throwawaymaths 2 hours ago | parent | prev [-] | | this time in lua. cloudflare can't catch a break | | |
| ▲ | RoyTyrell 2 hours ago | parent | next [-] | | Or they're not thoroughly testing changes before pushing them out. As I've seen some others say, CloudFlare at this point should be considered critical infrastructure. Maybe not like power but dang close. | | |
| ▲ | esseph 24 minutes ago | parent [-] | | My power goes out every Wednesday around noon, and usually when the weather is bad. In a major US metro. I hope Cloudflare is far more resilient than local power. |
| |
| ▲ | gcau 2 hours ago | parent | prev | next [-] | | The 'rewrite it in lua' crowd are oddly silent now. | | | |
| ▲ | rvz an hour ago | parent | prev [-] | | Time to use boring languages such as Java and Go. |
|
|
|
| ▲ | barbazoo 2 hours ago | parent | prev [-] |
| > Customers that did not have the configuration above applied were not impacted. Customer traffic served by our China network was also not impacted. Interesting. |
| |
| ▲ | flaminHotSpeedo 2 hours ago | parent [-] | | They kinda buried the lede there: a 28% failure rate for 100% of customers isn't the same as a 100% failure rate for 28% of customers |
|