rmunn 13 hours ago
I may have given a misleading impression. Each property test took milliseconds to run, and FsCheck defaults to generating 100 random inputs for each test. Running the whole test suite took 5-10 minutes depending on whether I ran the longer tests or skipped them (the tests that generated very large lists, then split and concatenated them several times, took longer than the rest of the test suite combined).

What ran overnight was the stress-testing feature of the Expecto test runner (https://github.com/haf/expecto?tab=readme-ov-file#stress-tes...), where instead of running 100 cases for each property you define, it keeps generating random input and running tests for a fixed length of time. I would set it to run for 8 hours and then go to bed. In the morning I would look at the millions (literally millions, usually 2-3 million) of random tests that had been run, all of which were passing, and say, "Yep, I probably don't have any bugs left."
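For anyone who hasn't used this combination, here is a minimal sketch of the kind of setup described above, assuming a recent Expecto plus the Expecto.FsCheck package. The property shown (split a list at an index, concatenate the pieces, get the original back) is only an illustration in the same spirit, not the actual test suite.

    open Expecto
    open FsCheck

    // One property test; by default FsCheck generates 100 random inputs for it.
    let properties =
      testList "list split/concat" [
        testProperty "splitAt then append is the identity" <| fun (xs: int list) (NonNegativeInt n) ->
          // Clamp the random index into the valid range for this list.
          let i = n % (List.length xs + 1)
          let left, right = List.splitAt i xs
          left @ right = xs
      ]

    [<EntryPoint>]
    let main argv =
      runTestsWithCLIArgs [] argv properties

A normal run does the usual 100 cases per property; per the stress-testing section of the README linked above, passing something like --stress 8 on the command line instead keeps generating random inputs for roughly eight hours.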
dkarl 11 hours ago (in reply)
That's pretty cool, and now I'm curious if there's something similar for ScalaCheck. My comment comes from my own experience, though, introducing Hypothesis and ScalaCheck into codebases and quickly causing noticeable increases in unit test times.

I think the additional runtime is well worth it overall, but it can be a hard sell when people are used to running unit tests several times an hour as part of their development cycle. To avoid people saying, "Running four minutes of tests five times per hour is ruining my flow and productivity," I make sure they have a script or command to run a subset of basic, less comprehensive tests, or to run only the tests relevant to the changes they've made.