dzonga 3 hours ago

> thread fl2_worker_thread panicked: called Result::unwrap() on an Err value

I don't use Rust, but a lot of Rust people say if it compiles it runs.

Well, Rust won't save you from the usual programming mistake. Not blaming anyone at Cloudflare here. I love Cloudflare and the awesome tools they put out.

End of the day - let's pick languages / tech because of what we love to do. If you love Rust - pick it all day. I actually wanna try it for industrial robot stuff or small controllers etc.

There's no bad language - just occasional hiccups from the people using those tools.

jryio 3 hours ago | parent | next [-]

You misunderstand what Rust's guarantees are. Rust has never promised to solve or protect programmers from logic errors or poor programming. In fact, no language can do that, not even Haskell.

Unwrapping is a very powerful and important assertion to make in Rust whereby the programmer explicitly states that the value within will not be an error, otherwise panic. This is a contract between the author and the runtime. As you mentioned, this is a human failure, not a language failure.
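
A minimal illustration of that contract (a hypothetical parse, obviously not Cloudflare's actual code):

    use std::num::ParseIntError;

    // Hypothetical example: parse a numeric limit from a config value.
    fn parse_limit(raw: &str) -> Result<u32, ParseIntError> {
        raw.trim().parse::<u32>()
    }

    fn main() {
        // The programmer asserts "this can never be Err" -- if it ever is, the thread
        // panics with `called Result::unwrap() on an Err value`, as in the outage log.
        let limit = parse_limit("not a number").unwrap();
        println!("limit = {limit}");
    }

The honest alternative is to return the Result and let the caller decide what a failure means.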

Pause for a moment and think about what a C++ implementation of a globally distributed network ingress proxy service would look like - and how many memory vulnerabilities there would be… I shudder at the thought… (n.b. nginx)

This is the classic example of over-indexing on the cause of a failure when something goes wrong, while under-indexing on the quadrillions of memory accesses that went off without a single hitch thanks to the borrow checker.

I postulate that whatever this Cloudflare outage cost, whether millions or hundreds of millions of dollars, it has been more than paid for by the savings from safe memory access.

See: https://en.wikipedia.org/wiki/Survivorship_bias

jsheard an hour ago | parent [-]

> Pause for a moment and think about what a C++ implementation of a globally distributed network ingress proxy service would look like - and how many memory vulnerabilities there would be… I shudder at the thought… (n.b. nginx)

No need to speculate: Cloudflare's ingress proxy was previously a customized nginx, and it did fail spectacularly due to memory unsafety. That's probably a large part of why they rewrote it in Rust.

https://en.wikipedia.org/wiki/Cloudbleed

lmm 2 hours ago | parent | prev | next [-]

> Rust won't save you from the usual programming mistake.

Disagree. Rust is at least giving you an "are you sure?" moment here. Calling unwrap() should be a red flag, something that a code reviewer asks you to explain; you can have a linter forbid it entirely if you like.
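
For example, Clippy's restriction lints can make any unwrap() or expect() a hard error for the whole crate:

    // At the crate root: reviewed, deliberate error handling only.
    #![deny(clippy::unwrap_used)]
    #![deny(clippy::expect_used)]

    fn main() {} // placeholder so the snippet is a complete crate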

No language will prevent you from writing broken code if you're determined to do so, and no language is impossible to write correct code in if you make a superhuman effort. But most of life happens in the middle, and tools like Rust make a huge difference to how often a small mistake snowballs into a big one.

SchemaLoad 2 hours ago | parent [-]

Yep, unwrap() and unsafe are escape hatches that need very good justification. They're fine for casual scripts where you don't care if it crashes. For serious production software they should either be banned or require immense scrutiny.

tptacek 3 hours ago | parent | prev | next [-]

What people are saying is that idiomatic prod rust doesn't use unwrap/expect (both of which panic on the "exceptional" arm of the value) --- instead you "match" on the value and kick the can up a layer on the call chain.
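
Roughly (made-up names and a made-up limit, not their actual code):

    // One place decides what a bad file means; lower layers just report it.
    #[derive(Debug)]
    enum ConfigError {
        TooManyFeatures(usize),
    }

    fn parse_features(raw: &str) -> Result<Vec<String>, ConfigError> {
        let features: Vec<String> = raw.lines().map(|l| l.to_string()).collect();
        if features.len() > 200 {                 // stand-in for whatever limit applies
            return Err(ConfigError::TooManyFeatures(features.len()));
        }
        Ok(features)
    }

    // Middle layer: `?` kicks the can up instead of panicking.
    fn reload(raw: &str) -> Result<Vec<String>, ConfigError> {
        let features = parse_features(raw)?;
        Ok(features)
    }

    fn main() {
        match reload("feature_a\nfeature_b") {
            Ok(f) => println!("loaded {} features", f.len()),
            Err(e) => eprintln!("reload failed ({e:?}), keeping previous config"),
        }
    }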

olivia-banks 3 hours ago | parent [-]

What happens to it up the callstack? Say they propagated it up the stack with `?`. It has to get handled somewhere. If you don't introduce any logic to handle the duplicate databases, what else are you going to do when the types don't match up besides `unwrap`ing, or maybe emitting a slightly better error message? You could maybe ignore that module's error for that request, but if it was a service more critical than bot mitigation you'd still have the same symptom of getting 500'd.

evil-olive 2 hours ago | parent | next [-]

> What happens to it up the callstack?

as they say in the post, these files get generated every 5 minutes and rolled out across their fleet.

so in this case, the thing farther up the callstack is a "watch for updated files and ingest them" component.

that component, when it receives the error, can simply continue using the existing file it loaded 5 minutes earlier.

and then it can increment a Prometheus metric (or similar) representing "count of errors from attempting to load the definition file". that metric should be zero in normal conditions, so it's easy to write an alert rule to notify the appropriate team that the definitions are broken in some way.

that's not a complete solution - in particular it doesn't necessarily solve the problem of needing to scale up the fleet, because freshly-started instances won't have a "previous good" definition file loaded. but it does allow for the existing instances to fail gracefully into a degraded state.

in my experience, on a large enough system, "this could never happen, so if it does it's fine to just crash" is almost always better served by a metric for "count of how many times a thing that could never happen has happened" and a corresponding "that should happen zero times" alert rule.
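
roughly the shape i mean (made-up names, and a plain atomic counter standing in for a real prometheus metric):

    use std::sync::atomic::{AtomicU64, Ordering};

    // stand-in for a prometheus counter like "definition_reload_errors_total"
    static RELOAD_ERRORS: AtomicU64 = AtomicU64::new(0);

    struct Definitions(Vec<String>);

    fn parse_definitions(raw: &str) -> Result<Definitions, String> {
        if raw.is_empty() {
            return Err("empty definition file".to_string());
        }
        Ok(Definitions(raw.lines().map(|l| l.to_string()).collect()))
    }

    // called every 5 minutes when a freshly generated file shows up
    fn on_new_file(current: &mut Definitions, raw: &str) {
        match parse_definitions(raw) {
            Ok(new_defs) => *current = new_defs,
            Err(e) => {
                // keep serving with the previous good definitions, and make the
                // failure visible so an alert fires on a non-zero counter
                RELOAD_ERRORS.fetch_add(1, Ordering::Relaxed);
                eprintln!("ignoring bad definition file: {e}");
            }
        }
    }

    fn main() {
        // the initial load is the gap mentioned above: a fresh instance has no
        // "previous good" file to fall back on
        let mut current = parse_definitions("rule_a\nrule_b").expect("initial load");
        on_new_file(&mut current, "");                         // bad update: old rules stay
        on_new_file(&mut current, "rule_a\nrule_b\nrule_c");   // good update: applied
        println!("{} rules, {} reload errors",
                 current.0.len(), RELOAD_ERRORS.load(Ordering::Relaxed));
    }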

acdha 2 hours ago | parent | prev | next [-]

The way I’ve seen this on a few older systems was that they always keep the previous configuration around so it can switch back. The logic is something like this:

1. At startup, load the last known good config.

2. When signaled, load the new config.

3. When that passes validation, update the last-known-good pointer to the new version.

That way, a failure like this one becomes recoverable, on the theory that stale config is better than the service staying down. One variant also recorded the last tried config version so it wouldn't even attempt to parse the latest one until it was changed again.

For Cloudflare, it’d be tempting to have step #3 happen only after 5 minutes or so, to catch stuff that crashes soon but not instantly.
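
Sketched out (hypothetical names, not any particular system's code), with the delayed step #3:

    use std::time::{Duration, Instant};

    struct Config {
        version: u64,
        rules: Vec<String>,
    }

    struct ConfigManager {
        last_known_good: Config,              // step 1: what we load at startup
        candidate: Option<(Config, Instant)>, // newly loaded, not yet trusted
        last_tried_version: u64,              // don't retry a version that already failed
    }

    impl ConfigManager {
        // step 2: when signaled, take the new config; a parse failure upstream
        // would leave last_known_good untouched
        fn try_new(&mut self, cfg: Config) {
            if cfg.version == self.last_tried_version {
                return; // already tried (and possibly crashed on) this version
            }
            self.last_tried_version = cfg.version;
            self.candidate = Some((cfg, Instant::now()));
        }

        // step 3, delayed: promote only after the candidate has run cleanly for a
        // while, to catch configs that crash soon but not instantly
        fn maybe_promote(&mut self, soak: Duration) {
            let ready = matches!(&self.candidate, Some((_, since)) if since.elapsed() >= soak);
            if ready {
                if let Some((cfg, _)) = self.candidate.take() {
                    self.last_known_good = cfg;
                }
            }
        }

        fn active(&self) -> &Config {
            match &self.candidate {
                Some((cfg, _)) => cfg,
                None => &self.last_known_good,
            }
        }
    }

    fn main() {
        let mut mgr = ConfigManager {
            last_known_good: Config { version: 1, rules: vec!["allow".into()] },
            candidate: None,
            last_tried_version: 1,
        };
        mgr.try_new(Config { version: 2, rules: vec!["allow".into(), "deny".into()] });
        mgr.maybe_promote(Duration::from_secs(0)); // zero soak time just for the demo
        println!("active config: v{}", mgr.active().version);
    }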

__turbobrew__ 2 hours ago | parent | prev | next [-]

Presumably you kick up the error to a level that says “if parsing new config fails, keep the old config”

tptacek 3 hours ago | parent | prev [-]

Yeah, see, that's what I mean.

metaltyphoon 3 hours ago | parent | prev | next [-]

> Well Rust won't save you from the usual programming mistake

This is not a Rust problem. Someone consciously chose to NOT handle an error, possibly thinking "this will never happen". Then someone else consciously reviewed (I hope) a PR with an unwrap() and let it slide.

Klonoar 2 hours ago | parent | prev | next [-]

> I don't use Rust, but a lot of Rust people say if it compiles it runs.

Do you grok what the issue was with the unwrap, though...?

Idiomatic Rust code does not use that. The fact that it's allowed in a codebase says more about the engineering practices of that particular project/module/whatever. Whoever put the `unwrap` call there had to contend with the notion that it could panic and they still chose to do it.

It's a programmer error, but Rust at least forces you to recognize "okay, I'm going to be an idiot here". There is real value in that.

dzonga 3 hours ago | parent | prev [-]

Other people might say - why use unsafe Rust - but we don't know the conditions the original code shipped under, or why the PR was approved.

Could have been a tight deadline, managerial pressure, or just the occasional slip-up.