AlotOfReading a month ago:
I don't think it's usually meaningful to discuss the idea of a "correct answer" without doing the traditional numerical analysis stuff. Realistically, we don't need to formalize most programs to that degree and provide precise error bounds either.

To give a practical example, I work on autonomous vehicles. My employer runs a lot of simulations against the vehicle code to make sure it meets reasonable quality standards. The hardware the code will run on in the field isn't necessarily available in large quantities in the datacenter, though, and realistically I'm not going to be able to apply numerical analysis to thousands of functions and millions of test cases to determine what the "correct" answer is in every case. Instead, I can privilege whatever the current result is as "correct enough," ensure the vehicle hardware always gets the same results as the datacenter hardware, and then rely on the many individual developers merging changes to review their own simulation results, knowing those results will hold on-road.

This is a pretty universal need for most software projects. If I'm a frontend dev, I want to know that my website will render the same on my computer as on the client's. If I'm a game engine dev, I want to ensure that replays don't diverge from the in-game experience. If I'm a VFX programmer, I want to ensure that the results generated on the artist's workstation look the same as what comes out of the render pipeline at the end. Etc.

All of these applications can still benefit from things like stability, but those benefits are orthogonal to reproducibility.
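A minimal sketch of what that pinning looks like in practice (hypothetical Python, with made-up run names): hash the exact bit patterns of the outputs, so two platforms are compared for bit-identical results rather than merely "close" ones.

    import hashlib
    import struct

    def fingerprint(outputs):
        # Hash the exact bit patterns of the float64 outputs; any
        # single-ULP divergence between platforms changes the digest.
        h = hashlib.sha256()
        for v in outputs:
            h.update(struct.pack('<d', v))
        return h.hexdigest()

    # Hypothetical usage: the same simulation run on both targets.
    # assert fingerprint(datacenter_run) == fingerprint(vehicle_run)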
aragilar a month ago:
I agree: if you know your requirements are such that you don't care about the correct answer, and any good-enough consistent answer will do (e.g. the fast inverse square root), then doing the full numerical analysis is not worth your while (though I'd still do some back-of-the-envelope estimates just to make sure the answer is in the "good enough" category and not in the "wtf" category).

The issue is when libraries are involved (especially when more generic libraries are involved): typically they don't document what assumptions they are making, so unless you are very clear about the limitations of your code, people will use the library inappropriately (and I've seen some really bad implementations, even from people claiming that their code can be used for more traditional numerical work).

To me it's the same as whether your default random number generator is a CSPRNG or just a PRNG: it's generally safer for all involved if it's the former. The latter should still exist, just with lots of warnings and guidance.
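For concreteness, here's the kind of back-of-the-envelope check I mean, using the classic fast inverse square root (a Python translation of the single-precision bit trick; the worst-case relative error of roughly 0.175% after one Newton step is what puts it in the "good enough" category for graphics and the "wtf" category for traditional numerical work):

    import struct

    def fast_inv_sqrt(x):
        # Quake III-style approximation of 1/sqrt(x), single precision.
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        i = 0x5f3759df - (i >> 1)            # magic-constant initial guess
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        return y * (1.5 - 0.5 * x * y * y)   # one Newton-Raphson step

    # Back-of-the-envelope estimate: worst-case relative error on a range.
    worst = max(abs(fast_inv_sqrt(x) - x ** -0.5) / (x ** -0.5)
                for x in (v / 100.0 for v in range(1, 100000)))
    print(worst)  # on the order of 2e-3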
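The Python standard library is a handy illustration of the defaults in question: random (a Mersenne Twister, seedable and reproducible, but just a PRNG) is the default, while secrets (backed by the OS CSPRNG) is opt-in, which is the opposite of what I'd want:

    import random   # Mersenne Twister: fast and seedable -- a plain PRNG
    import secrets  # backed by the OS CSPRNG (os.urandom)

    # Fine for simulations and tests, where reproducibility is the point:
    rng = random.Random(42)
    print([rng.random() for _ in range(3)])   # identical on every run

    # The safer default for anything an adversary might ever see:
    print(secrets.token_hex(16))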