AlotOfReading | 7 hours ago
I don't think it's usually meaningful to discuss the idea of a "correct answer" without doing the traditional numerical analysis stuff. Realistically, we don't need to formalize most programs to that degree and provide precise error bounds either.

To give a practical example, I work on autonomous vehicles. My employer runs a lot of simulations against the vehicle code to make sure it meets reasonable quality standards. The hardware the code will be running on in the field isn't necessarily available in large quantities in the datacenter, though. And realistically, I'm not going to be able to apply numerical analysis to thousands of functions and millions of test cases to determine what the "correct" answer is in every case. Instead, I can privilege whatever the current result is as "correct enough" and ensure the vehicle hardware always gets the same results as the datacenter hardware, then rely on the many individual developers merging changes to review their own simulation results, knowing those results will hold on-road.

This is a pretty universal need for most software projects. If I'm a frontend dev, I want to know that my website will render the same on my computer as on the client's. If I'm a game engine dev, I want to ensure that replays don't diverge from the in-game experience. If I'm a VFX programmer, I want to ensure that the results generated on the artist's workstation look the same as what comes out of the render pipeline at the end. Etc.

All of these applications can still benefit from things like stability, but the benefits are orthogonal to reproducibility.
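
To make the "privilege the current result" idea concrete, here's a minimal sketch of what such a check might look like. It's not my employer's actual test harness; run_simulation and GOLDEN are hypothetical names. The point is that the comparison is against the exact bit patterns of the floats, not an epsilon tolerance, so any cross-hardware divergence, however small, fails the check:

    import hashlib
    import struct

    def bitwise_digest(values):
        # Serialize each double by its exact bit pattern (not its decimal
        # string), so even a 1-ulp difference changes the digest.
        raw = b"".join(struct.pack("<d", v) for v in values)
        return hashlib.sha256(raw).hexdigest()

    # Digest recorded from the reference (datacenter) run; placeholder value.
    GOLDEN = "0000000000000000000000000000000000000000000000000000000000000000"

    def test_reproducible(run_simulation):
        # run_simulation() is a stand-in for the code under test; it should
        # return the same sequence of floats on every platform.
        assert bitwise_digest(run_simulation()) == GOLDEN

Note there's no "close enough" here by design: the golden digest is the definition of correct, and reviewing a change means re-recording the digest after inspecting the new simulation results.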