xyzzy123 · 14 hours ago
It depends on the application, but there are lots of situations where a proper test suite is 10x or more the development work of the feature itself. I've seen this most often with "heavy" integrations. A concrete example would be adding, say, SAML + SCIM to a product: you can pull in a library, do a happy-path test, and call it a day. Maybe add a test against a captive IdP in a container. But testing all the supported flows against each supported vendor becomes a major project in its own right if you want to do it properly. The number of possible edge cases is extreme, and automating the deployment, updates, and configuration of the peer products under test is a huge drag, especially when those products are hostile to automation.
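To give a sense of scale, here's a minimal sketch of what that vendor × flow test matrix might look like with pytest. Everything here is hypothetical: the vendor list, the `start_idp` helper (which would spin up a container or point at a sandbox tenant), and the `run_flow` driver all stand in for whatever actually provisions and exercises each IdP.

```python
import pytest

VENDORS = ["keycloak", "okta", "adfs"]            # hypothetical vendor matrix
FLOWS = ["sp_initiated", "idp_initiated", "slo"]  # flows under test


@pytest.fixture(params=VENDORS)
def idp(request):
    """Bring up (or connect to) a captive IdP for one vendor; purely illustrative."""
    instance = start_idp(request.param)  # hypothetical helper: container or sandbox tenant
    yield instance
    instance.teardown()


@pytest.mark.parametrize("flow", FLOWS)
def test_saml_flow(idp, flow):
    # hypothetical driver that performs the redirect/assertion exchange for one flow
    result = run_flow(idp, flow)
    assert result.ok, f"{flow} failed against {idp.vendor}: {result.detail}"
```

Even this toy matrix is 3 vendors × 3 flows = 9 environments to provision and keep updated; real products multiply that by versions, signing configurations, and SCIM edge cases.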
vrighter · 14 hours ago
Once, for a very, very critical part of our product, apart from the usual tests, I ended up writing another implementation of the thing, completely independently of the original dev and before looking at his code. We then ran the two side by side and checked that all of their outputs matched perfectly. The "test implementation" turned out to be more performant, and eventually the two implementations switched roles.
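A rough sketch of that side-by-side (differential) check, assuming two independently written functions that should agree on every input. Both implementations below are placeholders, not the actual product code:

```python
import random


def reference_impl(x: int) -> int:
    # stand-in for the original developer's implementation
    return sum(range(x))


def independent_impl(x: int) -> int:
    # stand-in for the independently written "test implementation"
    return x * (x - 1) // 2


def check_agreement(trials: int = 100_000) -> None:
    """Feed both implementations the same random inputs and fail on any divergence."""
    for _ in range(trials):
        x = random.randint(0, 10_000)
        a, b = reference_impl(x), independent_impl(x)
        assert a == b, f"divergence at input {x}: {a} != {b}"


if __name__ == "__main__":
    check_agreement()
    print("implementations agree on all sampled inputs")
```

Random sampling like this only catches divergences it happens to hit; for a genuinely critical path you'd want exhaustive input coverage or replayed production traffic where that's feasible.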