RHSeeger 7 days ago:

I've described this before on occasion; I consider there to be a wide variety of tests:

- Unit test = my code works
- Functional test = my design works
- Integration test = my code is using your 3rd party stuff correctly (databases, etc)
- Factory Acceptance Test = my system works
- Site Acceptance Test = your code sucks, this totally isn't what I asked for!?!

Then there are more "concern oriented" groupings, like "regression tests", which could fall into any number of the above. That being said, there's a pretty wide set of opinions on the topic, and that doesn't really seem to change over time.

> these kinds of tests are rarely ever worth writing

I strongly disagree. I find it very helpful to write unit tests for specific implementations of things (like a specific sort, to make sure it works correctly with the various edge cases). Do they get discarded if you completely change the implementation? Sure. But that doesn't detract from the fact that they help make sure the current implementation works the way I say it does.
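As a rough sketch of the kind of test I mean (the sort function and the cases here are invented for illustration, not taken from any particular codebase):

    # Hypothetical custom sort plus the edge-case unit tests I'd keep with it.
    def insertion_sort(items):
        """Return a new list with the items in ascending order."""
        result = []
        for item in items:
            i = len(result)
            while i > 0 and result[i - 1] > item:
                i -= 1
            result.insert(i, item)
        return result

    def test_empty_list():
        assert insertion_sort([]) == []

    def test_single_element():
        assert insertion_sort([1]) == [1]

    def test_already_sorted():
        assert insertion_sort([1, 2, 3]) == [1, 2, 3]

    def test_reverse_sorted():
        assert insertion_sort([3, 2, 1]) == [1, 2, 3]

    def test_duplicates_kept():
        assert insertion_sort([2, 1, 2, 1]) == [1, 1, 2, 2]

Runnable with pytest; the edge cases (empty input, single element, duplicates, reverse order) are exactly the ones that tend to break a hand-rolled sort.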
9rx 7 days ago:

> I find it very helpful to write unit tests for specific implementations of things (like a specific sort, to make sure it works correctly with the various edge cases).

Sorting mightn't be the greatest example, as sorting could quite reasonably be the entire program (i.e. a library). But if you needed some kind of custom sort function to serve features within a greater application, you are already going to know that your sort function works correctly by virtue of the greater application working correctly. Testing the sort function in isolation is ultimately pointless.

As before, there may be some benefit in writing code to run that sort function in isolation during development to help pinpoint what edge cases need to be considered, but there isn't any real value in keeping that around after development is done. The edge cases you discovered need to be moved up in the abstraction to the greater program anyway.
MrJohz 6 days ago:

It's very often easier to trigger edge cases when testing just a smaller part of a system than when testing the whole system. Moreover, you'll probably write more useful tests if you write them knowing what's going on in the code. In these cases, colocating the tests with the thing they're meant to be testing is really useful.

I find the problem with trying to move the tests up a level of abstraction is that eventually the code you're writing is probably going to change, and the tests that were useful for development the first time round will probably continue to be useful the second time round as well. So keeping them in place, even if they're really implementation-specific, is useful for as long as that implementation exists. (Of course, if the implementation changes for one with different edge cases, then you should probably get rid of the tests that were only useful for the old implementation.)

Importantly, this only works if the boundaries of the unit are fairly well-defined. If you're implementing a whole new sort algorithm, that's probably the case. But if I was just writing a function that compares two operands, that could be passed to a built-in sort function, I might look to see if there's a better level of abstraction to test at, because I can imagine the use of that compare function being something that changes a lot during refactorings. A sketch of what I mean is below.
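For instance (everything here is invented; I've used a key function, Python's usual stand-in for a two-operand comparator): the key is an implementation detail of the higher-level function, so I'd aim the test at that function rather than at the key itself.

    # Hypothetical example: test most_recent_first(), not the private key function.
    def _publish_date(release):
        # Implementation detail; could change to a tuple or a date object later.
        return release["published"]

    def most_recent_first(releases):
        """Return the releases ordered newest to oldest."""
        return sorted(releases, key=_publish_date, reverse=True)

    def test_orders_newest_first():
        releases = [
            {"name": "v1", "published": "2023-01-01"},
            {"name": "v3", "published": "2024-06-15"},
            {"name": "v2", "published": "2023-09-30"},
        ]
        assert [r["name"] for r in most_recent_first(releases)] == ["v3", "v2", "v1"]

If _publish_date gets refactored away, the test keeps working as long as the ordering behaviour stays the same.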
9rx 6 days ago:

> eventually the code you're writing is probably going to change

Ideally your units/integrations will never change. If they do change, that means the users of your code will face breakage, and that's not good citizenry. Life is messy and sometimes you have little choice, but such changes should be as rare as possible.

What is actually likely to change is the little helper functions you create to support the units, like said bespoke sort function. This is where testing can quickly make code fragile and is ultimately unnecessary. If the sort function is more useful than just a helper, then you will move it out into its own library and, like before, the sort function will become the entire program and thus the full integration.
MrJohz 6 days ago:

The interface ideally doesn't change, but the implementation probably will. And most of the units you're writing are probably internal-facing, which means that even if the interface does change, fixing that is just an internal refactoring change - with types and a good IDE, it's often just a couple of key presses away.

I think this is what you're saying about moving useful units out into their own library. I agree, and I think it sounds like we'd draw the testing boundaries in similar places, but I don't think it's necessary to move these sorts of units into separate libraries for them to be isolated modules that can be usefully tested.

The sort function is one of the edge cases where how I'd test it would probably depend a lot on the context. In theory a generic sort function has a very standard interface that I wouldn't expect to change much, if at all, so I'd be quite happy treating it as a unit in its own right and writing a bunch of tests for it. But if it's something really implementation-specific that depends on the exact structure of the thing it's sorting, then it's probably better tested in context. Either way, I'm quite willing to write tests for little helper functions that I'm sure will be quite stable.
9rx 6 days ago:

> The interface ideally doesn't change

The whole of the interface is the unit, as Beck originally defined it, as it is the integration point. Hence why there is no difference between them.

> And most of the units you're writing are probably internal-facing

No. As before, it is a mistake to test internal functions. They are just an implementation detail. I understand that some have taken unit test to mean this, but I posit that, as it is foolish to do it, there is no need to talk about it, allowing unit test to refer to its original and much more sensible definition. It only serves to confuse people into writing useless, brittle tests.

> So I'd be quite happy treating it as a unit in its own right

Right, and, likewise, you'd put it in its own package in its own right so that it is available to all sort cases you have. Thus, it is really its own program — and thus would have its own tests.
MrJohz 6 days ago:

> Right, and, likewise, you'd put it in its own package in its own right so that it is available to all sort cases you have. Thus, it is really its own program — and thus would have its own tests.

Sure, yeah, I think we're saying the same thing. A unit is a chunk of code that can act as its own program or library - it has an interface that will remain fairly fixed, and an implementation that could change over time. (Or, a unit is the interface that contains this chunk of code - I don't think the difference between these two definitions is so important here.) You could pull it out into its own library, or you can keep it as a module/file/class/function in a larger piece of software, but it is a self-contained unit.

I think the important thing that I was trying to get across earlier, though, is that this unit can contain other units. At the most maximal scale, the entire application is a single unit made up of multiple sub-units. This is why I think a definition of unit/integration test that is based on whether a unit integrates other units doesn't really make much sense, because it doesn't actually change how you test the code. You still want quick, isolated tests, you still want to test the interface and not the internals (although you should be guided by the internals), and you still want to avoid mocking. So distinguishing between unit tests and integration tests in this way isn't particularly useful.
9rx 6 days ago:

> and you still want to avoid mocking.

Assuming by mock you mean an alternate implementation (e.g. an in-memory database repository) that relieves dependence on a service that is outside of immediate control, nah. There is no reason to avoid that. That's just an implementation detail and, as before, your tests shouldn't be bothered by implementation details. And since you can run your 'mock' against the same test suite as the 'real thing', you know that it fulfills the same contract as the 'real thing'. Mocks in that sense are also useful outside of testing.

If you mean something more like what is more commonly known as a stub, still no. This is essential for injecting failure states. You don't want to have to actually crash your hard drive to test your code under a hard drive crash condition. Tests for failure cases are the most important tests you will write, so you will definitely be using these in all but the simplest programs.
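As a minimal sketch of the stub case (all names invented), the failure is injected without touching a real disk:

    # Invented example: a stub that simulates a dead disk so the error
    # path can be tested without actually breaking any hardware.
    class FailingStorage:
        def write(self, key, data):
            raise OSError("simulated disk failure")

    def save_report(storage, report):
        """Return True on success, False if storage is unavailable."""
        try:
            storage.write("report.txt", report)
            return True
        except OSError:
            return False

    def test_save_report_survives_disk_failure():
        assert save_report(FailingStorage(), "quarterly numbers") is False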
RHSeeger 4 days ago:

> But if you needed some kind of custom sort function to serve features within a greater application, you are already going to know that your sort function works correctly by virtue of the greater application working correctly. Testing the sort function in isolation is ultimately pointless.

It is entirely possible for a sort function to be just one component of the functionality of the larger code base. Sort in specific is something I've written unit tests for.

> As before, there may be some benefit in writing code to run that sort function in isolation during development to help pinpoint what edge cases need to be considered, but there isn't any real value in keeping that around after development is done.

Those edge cases (and normal cases) continue to exist after the code is written. And if you find a new edge case later and need to change the code, then having the previous unit tests in place gives a certain amount of confidence that your changes (for the new case) aren't breaking anything. Generally, the only time I _remove_ unit tests is if I'm changing to a new implementation, when the method being tested no longer exists.
MrJohz 6 days ago:

I find if you figure out the right unit boundaries, and find a good way of testing the code, you can often keep the tests around long-term, and they'll be very stable. Even when you update the code you're testing, if the tests are well-written, updating the tests is often just a case of running a find-and-replace job.

That said, I think it takes a real knack to figure out the right sort of tests, and it sometimes takes me a couple of attempts to get it right. In that case, being willing to delete or completely rewrite tests that just aren't being useful is important!
mrugge 7 days ago:

In test-driven development, fast unit tests are a must-have; integration tests are too slow. If you are not doing test-driven development, you can go heavier into integration tests.

I find the developer experience is not as fun without good unit tests, and even if velocity metrics are the same, that factor alone is a good reason to focus on writing more fast unit tests.
MrJohz 6 days ago:

In general, fast tests are a must-have, but I find that means figuring out how to write fast integration tests as well, so that they can also be run as part of a TDD-like cycle. In my experience, integration tests can generally be written to be very quick, but maybe my definition of an integration test is different from yours?

For me, heavy tests implies end-to-end tests, because at that point you're interacting with the whole system, including potentially a browser, and that's just going to be slow whichever way you look at it. But just accessing a database, or parsing and sending HTTP requests, doesn't have to be particularly slow, at least not compared to the speed at which I develop. I'd expect to be able to run hundreds of those sorts of tests in less than a second, which is fast enough for me. A sketch of the kind of test I mean is below.
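Rough illustration of what I mean by a fast database test (schema and helpers are made up; SQLite in memory stands in for whatever database the project actually uses):

    # Invented example: a database-backed test that still runs in milliseconds.
    import sqlite3

    def add_user(conn, name):
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count_users(conn):
        return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

    def test_added_users_are_counted():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        add_user(conn, "alice")
        add_user(conn, "bob")
        assert count_users(conn) == 2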
mrugge 6 days ago:

I inherited a Django project which has mostly 'unit' tests that flex the ORM and the db, so they are really integration tests and are painfully slow. There is some important logic that happens in the ORM layer, and that needs to be tested. At some point I want to find the time to mock the database so that they can be faster, but in some cases I worry about missing important interactions. The domain is highly specialized, so it's not easy to just know how to untangle the mess.
9rx 6 days ago:

> I worry about missing important interactions.

If you are concerned that the ORM won't behave as it claims to, you can write tests targeted at it directly. You can then run the same tests against your mock implementation to show that it conforms to the same contract. But an ORM of any decent quality will already be well tested and shouldn't do unexpected things, so perhaps the worry is for naught?
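Something like this shape (names are invented, and plain sqlite3 stands in for the Django ORM just to keep the sketch self-contained): one contract suite, run against both the database-backed repository and its in-memory mock, so you know the mock honours the same contract.

    # Invented example: shared contract tests run against two implementations.
    import sqlite3
    import unittest

    class SqliteUserRepository:
        def __init__(self):
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE users (name TEXT UNIQUE)")

        def add(self, name):
            self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

        def exists(self, name):
            row = self.conn.execute(
                "SELECT 1 FROM users WHERE name = ?", (name,)).fetchone()
            return row is not None

    class InMemoryUserRepository:
        def __init__(self):
            self.names = set()

        def add(self, name):
            self.names.add(name)

        def exists(self, name):
            return name in self.names

    class RepositoryContract:
        """Tests that every repository implementation must pass."""
        def make_repository(self):
            raise NotImplementedError

        def test_added_user_exists(self):
            repo = self.make_repository()
            repo.add("alice")
            self.assertTrue(repo.exists("alice"))

        def test_unknown_user_does_not_exist(self):
            repo = self.make_repository()
            self.assertFalse(repo.exists("bob"))

    class SqliteRepositoryTest(RepositoryContract, unittest.TestCase):
        def make_repository(self):
            return SqliteUserRepository()

    class InMemoryRepositoryTest(RepositoryContract, unittest.TestCase):
        def make_repository(self):
            return InMemoryUserRepository()

    if __name__ == "__main__":
        unittest.main()

If the in-memory version ever drifts from the real one, the shared suite catches it, and the rest of the tests can lean on the fast implementation.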