danbruc · 7 hours ago
How effective is property-based testing in practice? I would assume it has no trouble uncovering things like missing null checks or an inverted condition, because you can cover edge cases like null, -1, 0, 1, 2^n - 1 with relatively few test cases and exhaustively test booleans. But beyond that, if I have a handful of integers, dates, or strings, the state space is just enormous, and it seems all but impossible to me that blindly trying random inputs will ever find any interesting input. If I have a condition like (state == "disallowed"), or (limit == 4096) when it should have been 4095, what are the odds that a random input will ever pass this condition and test the code behind it?

Microsoft had a remotely similar tool named Pex [1], but instead of randomly generating inputs, it instrumented the code so that it could also be executed symbolically, and then used their Z3 theorem prover to systematically find inputs that make each encountered condition either true or false, incrementally exploring all possible execution paths. If I remember correctly, it then generated a unit test for each discovered input with the corresponding output, and you could then judge whether the output was what you expected.

[1] https://www.microsoft.com/en-us/research/publication/pex-whi...
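The odds question can be sketched numerically. This is an illustrative Python sketch, not any particular framework: the predicate, the function names, and the 31-bit input range are all invented, assuming a naively uniform generator.

```python
import random

def check_limit(limit: int) -> bool:
    # Hypothetical buggy predicate: the intended bound was 4095,
    # so only the exact input 4096 exposes the off-by-one.
    return limit <= 4096

def blind_random_search(trials: int, seed: int = 0) -> bool:
    # Draw uniform 31-bit integers and report whether the single
    # interesting input, 4096, is ever generated.
    rng = random.Random(seed)
    return any(rng.randrange(2**31) == 4096 for _ in range(trials))

# With uniform sampling, the chance of hitting 4096 at least once is
# 1 - (1 - 1/2**31)**trials, i.e. roughly 5e-6 even after 10,000 trials.
```

This is why practical PBT generators do not sample uniformly: they bias toward boundary values (0, 1, -1, maxima) and shrink counterexamples, which helps with generic edge cases but not with magic constants like 4096.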
IanCal · 2 hours ago
In practice I’ve found that property-based testing has a very high ratio of value to effort per test written. UI properties like:

* if there is one or more items on the page, one has focus
* if there is more than one, hitting Tab changes focus
* if there is at least one, focusing on element x, hitting Tab n times and then Shift+Tab n times puts me back on the original element
* if there are n elements, n > 0, hitting Tab n times visits n unique elements

are pretty clear and yet cover a remarkable range of issues. I had these for a UI library, which came with the start of "given a UI built with arbitrary calls to the API, those things remain true".

Now, it's rare that they'd catch very specific edge cases, but it was hard to write something wrong accidentally and still pass the tests. They actually found a bug in the specification, which was inconsistent.

I think they can often be easier to write than specific tests, and clearer to read, because they say what you are actually testing (a generic property, where otherwise you'd have had to write a few explicit examples).

What you could add, though, is code coverage. If the tests never go through your extremely specific branch, that's a sign there may be a bug hiding there.
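The Tab/Shift+Tab round-trip property above can be modeled on a toy focus ring. This is a sketch under the assumption that focus wraps around at the ends; it models the property, not IanCal's actual library.

```python
import random

def tab(focus: int, n: int) -> int:
    # Move focus to the next of n elements, wrapping at the end.
    return (focus + 1) % n

def shift_tab(focus: int, n: int) -> int:
    # Move focus to the previous element, wrapping at the start.
    return (focus - 1) % n

def holds_round_trip_property(trials: int = 200, seed: int = 0) -> bool:
    # Property: Tab k times then Shift+Tab k times returns to the start,
    # for any element count n > 0, start position, and k.
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randrange(1, 20)      # at least one element
        start = rng.randrange(n)
        k = rng.randrange(50)
        focus = start
        for _ in range(k):
            focus = tab(focus, n)
        for _ in range(k):
            focus = shift_tab(focus, n)
        if focus != start:
            return False
    return True
```

The value is that the property is stated once and checked across hundreds of (n, start, k) combinations, instead of a handful of hand-picked examples.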
spooneybarger · 6 hours ago
An important step with property-based testing and similar techniques is writing your own generators for your domain objects. I have used it to incredible effect for many years in projects.

I work at Antithesis now, so you can take this with a grain of salt, but for me everything changed over a decade ago when I started applying PBT techniques broadly. I have found so many bugs that I wouldn't otherwise have found until production.
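As a sketch of what a domain-specific generator looks like (the `Booking` type and its date ranges are invented for illustration): the point is to construct objects that satisfy domain invariants by construction, rather than generating random primitives and filtering out the invalid combinations.

```python
import random
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Booking:
    start: date
    end: date  # domain invariant: end >= start

def gen_booking(rng: random.Random) -> Booking:
    # Domain-aware generator: derives the end date from the start date,
    # so every generated Booking is valid by construction -- no rejection
    # sampling over random (start, end) pairs.
    start = date(2020, 1, 1) + timedelta(days=rng.randrange(3650))
    end = start + timedelta(days=rng.randrange(90))
    return Booking(start, end)

# A property test then consumes a stream of such objects:
rng = random.Random(0)
bookings = [gen_booking(rng) for _ in range(100)]
```

Generating valid objects directly keeps the test focused on the property under test instead of on discarding nonsense inputs, and it lets the generator encode domain knowledge (realistic ranges, required relationships between fields).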
kqr · 5 hours ago
"Exhaustively covering the search space" or "hitting specific edge cases" is the wrong way to think about property tests, in my experience. I find them most valuable as insanity checks, i.e. they can verify that basic invariants hold under conditions even I wouldn't think of testing manually. I'd check for empty strings, short strings, long strings, strings without spaces, strings with spaces, strings with weird characters, etc. But I might not think of testing with a string that's only spaces. The generator will. | ||||||||
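The all-spaces case can be sketched concretely. The `initial` helper here is invented: assume it is meant to return the first non-space character, or '' for blank input. A generator biased toward whitespace-heavy strings finds the crash quickly.

```python
import random

def initial(s: str) -> str:
    # Hypothetical function under test: intended to return the first
    # non-space character, or '' when there is none. Buggy: raises
    # IndexError on strings consisting only of spaces.
    if s == "":
        return ""
    return s.lstrip()[0]

def find_counterexample(trials: int = 2000, seed: int = 0):
    # Generator biased toward whitespace: spaces make up a third of
    # the alphabet, so all-space strings come up often.
    rng = random.Random(seed)
    alphabet = "ab "
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(10)))
        try:
            initial(s)
        except IndexError:
            return s  # a non-empty string of only spaces
    return None
```

A manual test suite rarely includes "   " as an input; a generator that merely weights spaces heavily stumbles into it within a few dozen cases.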
kwillets · 4 hours ago
One of the founders of Antithesis gave a talk about this problem last week; diversity in test cases is definitely an issue they're trying to tackle. The example he gave was Spanner tests not filling their cache because random inputs jittered near zero. Avoiding that kind of blind spot appears to be a company goal.

https://github.com/papers-we-love/san-francisco/blob/master/...
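The cache anecdote can be illustrated with a toy model (all numbers invented): uniformly random keys drawn from a large space essentially never repeat, so a cache driven by such a workload stays cold and its hit paths go untested.

```python
import random

def cache_hit_rate(key_space: int, requests: int, seed: int = 0) -> float:
    # Simulate an unbounded cache fed uniformly random keys and report
    # the fraction of requests that were repeats (i.e. cache hits).
    rng = random.Random(seed)
    seen, hits = set(), 0
    for _ in range(requests):
        k = rng.randrange(key_space)
        if k in seen:
            hits += 1
        seen.add(k)
    return hits / requests

# A realistic workload is skewed and repeatedly touches hot keys, warming
# the cache; uniform random inputs exercise almost none of the cache logic.
```

The same system, fed from a small or skewed key distribution, would show a near-100% hit rate, which is why input diversity has to include distribution shape, not just value coverage.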
skybrian · 7 hours ago
One thing you can find pretty quickly with just basic fuzzing on strings is Unicode-related bugs. | ||||||||
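For example, even a property as innocuous-looking as "upper-casing preserves length" fails under arbitrary Unicode input. A minimal Python sketch (the helper name is invented):

```python
def shout(s: str) -> str:
    # Hypothetical helper; callers implicitly assume that
    # len(shout(s)) == len(s), which holds for ASCII input.
    return s.upper()

# A fuzzer feeding arbitrary Unicode strings falsifies the assumption:
# German sharp s upper-cases to two characters ("SS").
counterexample = "stra\u00dfe"  # "straße"
assert len(shout(counterexample)) != len(counterexample)
```

Similar fuzz-found classes include combining characters that break naive string reversal and normalization mismatches in string comparison, none of which ASCII-only example tests will ever touch.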