dcminter 7 days ago
How about you fix the flakey tests? The tests I'd delete are the ones that just test that the code is written in a particular way instead of testing the expected behaviour of the code.
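A minimal sketch of the distinction (hypothetical slugify helper, pytest-style, not from any particular codebase): the first test pins *how* the code is written, the second pins only what it does.

```python
import re
from unittest.mock import patch

def _collapse_whitespace(s: str) -> str:
    return re.sub(r"\s+", " ", s).strip()

def slugify(title: str) -> str:
    return _collapse_whitespace(title).lower().replace(" ", "-")

# Brittle "how it's written" test: asserts slugify calls the helper,
# so any behaviour-preserving refactor (inlining it, using str.split)
# fails the test for no user-visible reason.
def test_slugify_calls_helper():
    with patch(f"{__name__}._collapse_whitespace",
               wraps=_collapse_whitespace) as spy:
        slugify("  Hello   World ")
        spy.assert_called_once()

# Behaviour test: asserts only what callers actually depend on.
def test_slugify_behaviour():
    assert slugify("  Hello   World ") == "hello-world"
```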
Shank 7 days ago
> How about you fix the flakey tests?

Oftentimes a flakey test isn't flakey because it was badly written; the test is fine, and something else strange in the system is failing. Often the test is revealing something about the system that is somewhat non-deterministic, though not non-deterministic in a detrimental way. When you have multiple levels of abstraction, parallelization, and interdependent behavior, fixing a single test becomes a time-consuming process that is difficult to work with (because it's flakey, you can't reliably reproduce the failure). If a test fails in CI and the traceback is unclear, many people will re-run once and let it continue to flake. Obvious flakes around time and other dependencies are much easier to spot, so they do get fixed. It's only the weird ones that lead to pain and regret.
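To illustrate the "obvious flakes around time" case with a rough sketch (hypothetical make_token function, pytest-style): the first test reads the wall clock twice and only fails when the runner stalls between the two reads, which is exactly the kind of flake that is easy to spot and fix by injecting the clock.

```python
from datetime import datetime, timedelta

def make_token(now=None):
    # Hypothetical: a token that expires 60 seconds after creation.
    now = now or datetime.now()
    return {"expires_at": now + timedelta(seconds=60)}

# Flaky: reads the wall clock twice. On a loaded CI runner, anything
# over ~1s between the two reads (GC pause, noisy neighbour) makes
# this fail -- but only occasionally.
def test_token_lifetime_flaky():
    token = make_token()
    assert token["expires_at"] > datetime.now() + timedelta(seconds=59)

# Deterministic: inject the clock so the test checks the logic,
# not the scheduler.
def test_token_lifetime_fixed():
    frozen = datetime(2024, 1, 1, 12, 0, 0)
    token = make_token(now=frozen)
    assert token["expires_at"] == frozen + timedelta(seconds=60)
```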
silversmith 7 days ago
Came here to comment this. Most flakey tests are badly written, but some warn you about bugs you don't yet understand. A couple of years ago I helped bring a project back on track. It had a notoriously flakey part of its test suite, which turned out to be caused by a race condition. It also had a very puzzling case of occasional data corruption, which, it turned out, was caused by the same race condition.
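A rough sketch of how the same race can surface as both a flakey test and production data corruption (hypothetical Wallet class, plain threading; the read-modify-write is deliberately left unsynchronised):

```python
import threading

class Wallet:
    """Hypothetical shared state with an unsynchronised read-modify-write."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        current = self.balance            # read
        # ...other work happens here in the real system...
        self.balance = current + amount   # write: classic lost-update window

def test_concurrent_deposits():
    wallet = Wallet()

    def worker():
        for _ in range(10_000):
            wallet.deposit(1)

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Usually 40_000, occasionally less: the flake *is* the data corruption.
    # The right fix is a threading.Lock around deposit(), not deleting
    # or auto-retrying the test.
    assert wallet.balance == 40_000
```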
XorNot 7 days ago
This. Anything that starts asserting stuff like "called the API N times" is utterly worthless (looking at you, whole bunch of AWS API mock tests...).
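For illustration, a sketch of the kind of call-count assertion being criticised, next to a version that at least checks what ended up where (hypothetical upload_reports function using unittest.mock; not tied to any real AWS test suite):

```python
from unittest.mock import MagicMock

def upload_reports(s3_client, bucket, reports):
    """Hypothetical: push each report to S3 under reports/<name>."""
    for name, body in reports.items():
        s3_client.put_object(Bucket=bucket, Key=f"reports/{name}", Body=body)

# Brittle: only pins the number of SDK calls. A refactor that batches,
# retries, or skips empty reports "fails" even when the outcome is
# identical -- and a bug that writes the wrong key still passes.
def test_upload_reports_call_count():
    s3 = MagicMock()
    upload_reports(s3, "my-bucket", {"a.csv": b"1", "b.csv": b"2"})
    assert s3.put_object.call_count == 2

# Somewhat better: assert the observable effect (what was written where),
# not the shape of the implementation.
def test_upload_reports_contents():
    s3 = MagicMock()
    upload_reports(s3, "my-bucket", {"a.csv": b"1", "b.csv": b"2"})
    written = {c.kwargs["Key"]: c.kwargs["Body"]
               for c in s3.put_object.call_args_list}
    assert written == {"reports/a.csv": b"1", "reports/b.csv": b"2"}
```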