majormajor 10 days ago
LLMs are pretty damn useful for generating tests; they get rid of a lot of tedium. But the same rule applies as for human-written tests: if you don't check that your test fails when it should, you shouldn't have much confidence in it. And that's not the same thing as just writing a second test for the negative case: both tests need to fail when you intentionally screw with their separate fixtures.
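A quick sketch of that sanity check, assuming pytest; parse_price() and the fixture contents are hypothetical names for illustration, not anything from the thread:

    import pytest

    def parse_price(raw: str) -> float:
        # Hypothetical unit under test: "$1,234.56" -> 1234.56.
        return float(raw.lstrip("$").replace(",", ""))

    @pytest.fixture
    def valid_input():
        # Corrupt this (e.g. return "$9.99") and test_parses_valid must go red.
        return "$1,234.56"

    @pytest.fixture
    def invalid_input():
        # Corrupt this (e.g. return "$2.00") and test_rejects_invalid must go red.
        return "not a price"

    def test_parses_valid(valid_input):
        assert parse_price(valid_input) == 1234.56

    def test_rejects_invalid(invalid_input):
        # float() raises ValueError on unparseable input.
        with pytest.raises(ValueError):
            parse_price(invalid_input)

The point being: mutate each test's own fixture separately and confirm that test actually fails before trusting either one.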
marcosdumay 10 days ago | parent
If an LLM can generate a test for you, it's a test you shouldn't have needed to write; it can't test what is really important, at all.

Some development stacks are extremely underpowered for code verification, so LLM-generated tests do patch over that design flaw, just like some stacks are underpowered for abstraction and need patching by code generation. Both solve an immediate problem in a haphazard, error-prone way, adding a maintenance and code-evolution burden proportional to how much you use them. Worse, if you rely on them too heavily, they will steer your software architecture and make that burden superlinear.