ozgrakkurt 5 days ago

What do you think about leaning on fuzz testing and deriving unit tests from bugs found by fuzzing?
JonChesterfield 5 days ago
You end up with a pile of unit tests with names like "regression: don't crash when rhs is null" or "regression: terminate on this input", which seems fine.

The "did it change?" genre of characterisation/snapshot tests can be created very effectively with a fuzzer, but those should probably be kept separate from the unit tests that check for specific behaviour, and partially regenerated when you deliberately change behaviour.

LLVM has a bunch of tests that were generated mechanically from whatever the implementation happened to do and then checked in. I don't rate these: they're thousands of lines long, they glow red in code review, and I'm pretty sure nobody reads them in practice. But because they exist, more focused tests don't get written.
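To make the workflow concrete, here is a rough sketch of what "fuzz target plus derived regression tests" can look like with libFuzzer and GoogleTest. Everything specific is made up for illustration: parse_expr, parser.h, and the crashing inputs are hypothetical, not from any real project.

    // fuzz_parse.cpp -- libFuzzer entry point feeding arbitrary bytes
    // to the (hypothetical) parser under test.
    #include <cstddef>
    #include <cstdint>
    #include <string>

    #include "parser.h"  // assumed header declaring parse_expr(const std::string&)

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
      // Any crash, sanitizer report, or hang here produces a saved,
      // minimisable input file.
      parse_expr(std::string(reinterpret_cast<const char *>(data), size));
      return 0;
    }

    // parser_regression_test.cpp -- unit tests derived from fuzzer findings.
    // Each test replays the minimised input under a name that says what
    // went wrong, rather than "crash-abc123".
    #include <gtest/gtest.h>
    #include "parser.h"

    TEST(ParserRegression, MissingRhsDoesNotCrash) {
      // Reduced from a fuzzer crash: binary operator with no right operand.
      EXPECT_NO_THROW(parse_expr("1 +"));
    }

    TEST(ParserRegression, DeeplyNestedParensTerminates) {
      // Reduced from a fuzzer hang: recursion depth was unbounded.
      EXPECT_NO_THROW(parse_expr(std::string(10000, '(')));
    }

The fuzz target would typically be built with something like clang++ -fsanitize=fuzzer,address, and each minimised finding either gets promoted into a named test like the above or kept in the corpus for the "did it change?" style checks.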
manmal 5 days ago
What kind of bugs do you find this way, besides missing sanitization?