gishh 5 hours ago

Most people around here are too busy evangelizing rust or some web framework.

Most people around here don’t have any reason to have strong opinions about safety-critical code.

Most people around here spend the majority of their time trying to make their company money via startup culture, the annals of async web programming, and how awful some type systems are in various languages.

Working on safety-critical code with formal verification is the most intense, exhausting, fascinating work I’ve ever done.

Most people don’t work at a company that either needs or can afford a safety-critical toolchain that is sufficient for formal, certified verification.

The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely. This subtle point seems to have been lost a long time ago with “*end” developers trying to sell ads, or whatever.

kccqzy 4 hours ago | parent | next [-]

I appreciate your insights about formal verification but they are irrelevant. Notice that GP was talking about security-critical and you substituted safety-critical for it. Your average web app can have security-critical issues but it probably won’t have safety-critical issues. Let’s say through a memory safety vulnerability your web app allowed anyone to run shell commands on your server; that’s a security-critical issue. But the compromise of your server won’t put anyone in physical danger, so it’s not a safety-critical issue.

gishh 4 hours ago | parent [-]

Safety-critical systems aren’t connected to a MAC address you can ping. I didn’t move the goalposts.

josephg 3 hours ago | parent | next [-]

Sure they are. Eg, 911 call centers. Flight control. These systems aren’t on the open internet, but they’re absolutely networked. Do they apply regular security patches? If they do, they open themselves up to new bugs. If not, there are known security vulnerabilities just waiting for someone to use to slip into their network.

And what makes you think buggy software only causes problems when hackers get in? Memory bugs cause memory corruption and crashes. I don’t want my pacemaker running somebody’s cowboy C++, even if the device is never connected to the internet.

gishh 2 hours ago | parent [-]

Ah. I was responding to:

> Your average web app can have security-critical issues but they probably won’t have safety-critical issues.

How many air-gapped systems have you worked on?

AlotOfReading an hour ago | parent | prev [-]

Individual past experiences aren't always representative of everything that's out there.

I've worked on safety critical systems with MAC addresses you can ping. Some of those systems were also air-gapped or partially isolated from the outside world. A rare few were even developed as safety critical.

AlotOfReading 2 hours ago | parent | prev [-]

    The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely.
Software safety cases depend on being able to link the executable semantics of the code to your software safety requirements.

You don't inherently need to eliminate UB to define the executable semantics of your code, but in practice you do. The alternative is binary analysis of the final image, which wouldn't even need a qualified toolchain. But the semantics generated that way would be valid only for that exact build, and validation is one of the most expensive and time-consuming parts of safety-critical development.

Most people instead work at the source-code level and rely on qualified toolchains to translate defined code into binaries with equivalent semantics. Defining the executable semantics of source code inherently requires eliminating UB, because the kind of "unrestricted UB" we're talking about has no executable semantics, and neither does any code containing it. Qualified toolchains (e.g. CompCert, Green Hills, GCC qualified via Solid Sands, Diab) don't guarantee correct translation of code without defined semantics, and coding standards like MISRA also require eliminating it.

As a matter of actual practice, safety critical processes do "optimistically ignore" some level of undefined behavior, but not because UB is considered acceptable in principle.