imtringued 8 months ago:
I don't know why you are trolling us. Tests aren't supposed to check every input. The entire point of classic testing is to define static execution traces by hand, which by construction covers only a finite number of execution traces. Even 100% test coverage doesn't give you proof over all possible execution traces. Tests show that the software "works", but they do not prove the absence of bugs (which inherently requires you to specify what a "bug" is, hence the need for a specification). Not only is static verification more powerful, there is also a massive usability difference: you define pre- and postconditions on each function (also known as a specification or design contract) and the tool automatically checks that you never violate them. It solves a very different problem from classic unit or integration tests.
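A minimal sketch of that contract idea in Python, with runtime asserts standing in for the pre- and postconditions that a static verifier (e.g. Dafny, Frama-C, or SPARK) would prove for every input without ever running the code; the function and its conditions here are illustrative, not from the thread:

    # Pre/postcondition ("design by contract") sketch in plain Python.
    # A static verifier would check these conditions for *all* inputs;
    # here they are only checked at runtime for the inputs we happen to pass.

    def integer_sqrt(n: int) -> int:
        # Precondition: the caller must supply a non-negative integer.
        assert n >= 0, "precondition violated: n must be non-negative"

        # Naive search, kept simple for the sake of the sketch.
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1

        # Postcondition: r is the largest integer whose square does not exceed n.
        assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
        return r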
sangel 8 months ago:
Obviously not. Suppose the input to my function is a 64-bit integer. My test cannot possibly try every possible 64-bit value; it would take centuries for such a test to finish. This is why tools like formal verification and symbolic analysis can help you establish that for all possible integers X, your function does the right thing (for some definition of "right"). You get this assurance without having to actually enumerate all X.
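One concrete way to get that "for all X" assurance is an SMT solver. Here is a small sketch using Z3's Python bindings (the z3-solver package); the specific property checked is just an illustrative example, not something from the thread:

    # Proving a property for *all* 2^64 inputs without enumerating them,
    # using the Z3 SMT solver's Python bindings (pip install z3-solver).
    from z3 import BitVec, URem, prove

    x = BitVec("x", 64)  # a symbolic 64-bit value, standing for every possible input

    # Claim: testing the low bit agrees with unsigned remainder mod 2 for every
    # 64-bit x. Z3 prints "proved" if no counterexample exists.
    prove(((x & 1) == 0) == (URem(x, 2) == 0))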
stephencanon 8 months ago:
Indeed. Exhaustively testing a 64-bit input space requires about 600 machine-years for each nanosecond that the function under test takes to run. So _very_ short-running critical functions are testable on a cluster, but not efficiently testable, and if a function is simple enough to be exhaustively testable, it's usually easier to just prove it correct instead.
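A quick back-of-the-envelope check of that figure, assuming one test case per nanosecond on a single core:

    # Back-of-the-envelope check of the "about 600 machine-years" figure:
    # 2^64 test cases at one nanosecond each, on a single core.
    total_ns = 2**64                      # number of distinct 64-bit inputs
    seconds = total_ns / 1e9              # at 1 ns per test case
    years = seconds / (60 * 60 * 24 * 365.25)
    print(f"{years:,.0f} machine-years")  # ~585 machine-years per nanosecond of runtime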