9rx 6 days ago
> I find it very helpful to write unit tests for specific implementations of things (like a specific sort, to make sure it works correctly with the various edge cases).

Sorting mightn't be the greatest example, as sorting could quite reasonably be the entire program (i.e. a library). But if you needed some kind of custom sort function to serve features within a greater application, you are already going to know that your sort function works correctly by virtue of the greater application working correctly. Testing the sort function in isolation is ultimately pointless.

As before, there may be some benefit in writing code to run that sort function in isolation during development, to help pinpoint what edge cases need to be considered, but there isn't any real value in keeping that around after development is done. The edge cases you discovered need to be moved up the abstraction to the greater program anyway.
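For illustration, a minimal sketch of the kind of development-time harness described above. The insertion sort here is invented for this example, standing in for whatever custom sort the application needs; the loop at the bottom is the throwaway isolation code run against edge cases while developing.

```python
# Hypothetical example: a stable insertion sort standing in for the
# "custom sort function" discussed above.
def insertion_sort(items, key=lambda x: x):
    result = list(items)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Strict '>' preserves the relative order of equal keys (stability).
        while j >= 0 and key(result[j]) > key(current):
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result

# Development-time harness: run the sort in isolation against edge cases
# (empty input, single element, duplicates, reverse-sorted, negatives).
for case in [[], [1], [2, 1], [3, 3, 3], [5, 4, 3, 2, 1], [-1, 0, -1]]:
    assert insertion_sort(case) == sorted(case), case
```

Whether a harness like this should survive past development is exactly the point under debate in the replies below.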
MrJohz 6 days ago
It's very often easier to trigger edge cases when testing a smaller part of a system than when testing the whole system. Moreover, you'll probably write more useful tests if you write them knowing what's going on in the code. In these cases, colocating the tests with the thing they're meant to be testing is really useful.

The problem I find with trying to move the tests up a level of abstraction is that eventually the code you're writing is probably going to change, and the tests that were useful for development the first time round will probably continue to be useful the second time round as well. So keeping them in place, even if they're really implementation-specific, is useful for as long as that implementation exists. (Of course, if the implementation changes to one with different edge cases, then you should probably get rid of the tests that were only useful for the old implementation.)

Importantly, this only works if the boundaries of the unit are fairly well-defined. If you're implementing a whole new sort algorithm, that's probably the case. But if I were just writing a function that compares two operands, to be passed to a built-in sort function, I might look to see if there's a better level of abstraction to test at, because I can imagine the use of that compare function being something that changes a lot during refactorings.
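To make the comparator case concrete, a sketch in Python (the `compare_names` comparator is invented for this example). Testing the observable behaviour, the sorted output, rather than the comparator's return codes means the test survives a later refactor from a comparator to, say, a key function:

```python
from functools import cmp_to_key

# Hypothetical comparator: case-insensitive order, with ties between
# equal-ignoring-case names broken by ordinary string order.
def compare_names(a, b):
    la, lb = a.lower(), b.lower()
    if la != lb:
        return -1 if la < lb else 1
    return -1 if a < b else (1 if a > b else 0)

# Test at the level of the sorted output, not the comparator's -1/0/1
# protocol: this is the behaviour callers actually depend on.
names = ["banana", "Apple", "apple", "Banana"]
result = sorted(names, key=cmp_to_key(compare_names))
assert result == ["Apple", "apple", "Banana", "banana"]
```

If the comparator were later replaced by `key=lambda s: (s.lower(), s)`, the assertion above would still pass unchanged, which is the sense in which it sits at a better level of abstraction than asserting on individual `-1`/`0`/`1` return values.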
RHSeeger 4 days ago
> But if you needed some kind of custom sort function to serve features within a greater application, you are already going to know that your sort function works correctly by virtue of the greater application working correctly. Testing the sort function in isolation is ultimately pointless.

It is entirely possible for a sort function to be just one component of the functionality of a larger code base. Sorting specifically is something I've written unit tests for.

> As before, there may be some benefit in writing code to run that sort function in isolation during development to help pinpoint what edge cases need to be considered, but there isn't any real value in keeping that around after development is done.

Those edge cases (and the normal cases) continue to exist after the code is written. And if you find a new edge case later and need to change the code, having the previous unit tests in place gives a certain amount of confidence that your changes (for the new case) aren't breaking anything. Generally, the only time I _remove_ unit tests is when I'm changing to a new implementation, i.e. when the method being tested no longer exists.
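A sketch of what keeping those tests around looks like in practice. Both the merge sort and the "later-discovered" duplicate-handling case are invented for illustration; the point is that the older assertions stay in place and act as a regression net when the new case is added.

```python
# Hypothetical custom sort under test: a stable merge sort.
def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # '<=' takes from the left half on ties, keeping the merge stable.
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

# Original development-time cases, kept as regression tests.
assert merge_sort([]) == []
assert merge_sort([1]) == [1]
assert merge_sort([3, 1, 2]) == [1, 2, 3]

# Edge case discovered later (duplicates), added alongside the fix.
# The earlier assertions give confidence the change broke nothing.
assert merge_sort([2, 2, 1, 2]) == [1, 2, 2, 2]
```

If `merge_sort` were ever replaced wholesale by a different implementation, these implementation-specific tests would be the ones to retire, which matches the policy described above.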