| ▲ | jaredcwhite 9 hours ago |
| 100% test coverage, for most projects of modest size, is extremely bad advice. |
|
| ▲ | CuriouslyC 8 hours ago | parent | next [-] |
| Pre-agents, I'd 100% agree. Now it's not a bad idea; the cost to do it isn't terrible, though there are diminishing returns as you get above 90-95%.
| |
| ▲ | marcosdumay 7 hours ago | parent | next [-] |
| LLMs don't make bad tests any less harmful. Nor do they write good tests for the stuff people mostly can't write good tests for. |
| ▲ | zahlman 4 hours ago | parent [-] |
| Okay, but is aiming for 100% coverage really why the bad tests are bad? |
| |
| ▲ | pca006132 7 hours ago | parent | prev [-] |
| The problem is that it is natural to have code that is unreachable. Maybe you are defending against cases that may arise in the future (e.g., things that are not yet implemented), or you have algorithms written in a general way but only used in a specific way. 100% test coverage requires removing that code, which can hurt future development. |
| ▲ | sgk284 7 hours ago | parent [-] |
| It doesn't require removing them if you think you'll need them. It just requires writing tests for those edge cases so you have confidence that the code will work correctly if/when those branches do eventually run. I don't think anyone wants production code paths that have never been tried, right? |
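| A minimal sketch of that idea in C (sum_or_default and its test are made up here, not from the thread): keep the defensive branch, and drive it directly from a test so it counts toward coverage instead of being deleted.

    #include <assert.h>
    #include <stddef.h>

    /* General-purpose: handles an empty array, even though today's
       callers never pass one (the "unreachable" defensive branch). */
    static int sum_or_default(const int *xs, size_t n, int dflt) {
        if (n == 0)
            return dflt;        /* defensive branch */
        int s = 0;
        for (size_t i = 0; i < n; i++)
            s += xs[i];
        return s;
    }

    int main(void) {
        int xs[] = {1, 2, 3};
        assert(sum_or_default(xs, 3, -1) == 6);    /* the path production uses   */
        assert(sum_or_default(NULL, 0, -1) == -1); /* the defensive path, covered */
        return 0;
    }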
|
|
|
| ▲ | bdangubic 9 hours ago | parent | prev [-] |
| laziness? unprofessionalism? both? or something else? |
| |
| ▲ | spc476 7 hours ago | parent | next [-] |
| You forgot difficult. How do you test a system call failure? How do you test a system call failure when the first N calls need to pass? Be careful how you answer; some answers technically fall into the "undefined behavior" category (if you are using C or C++). |
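| One UB-free approach is to route the syscall through an indirection the test can swap out (GNU ld's --wrap option is another route). A rough C sketch, with hypothetical names (do_read, fault_read, fail_after, slurp):

    #include <assert.h>
    #include <errno.h>
    #include <unistd.h>

    /* Production code calls read() only through this pointer, so a test can
       substitute a failing version without redefining libc symbols. */
    static ssize_t real_read(int fd, void *buf, size_t n) { return read(fd, buf, n); }
    static ssize_t (*do_read)(int, void *, size_t) = real_read;

    /* Fault injector: let the first fail_after calls succeed, then fail with EIO. */
    static int fail_after;
    static ssize_t fault_read(int fd, void *buf, size_t n) {
        if (fail_after-- > 0)
            return real_read(fd, buf, n);
        errno = EIO;
        return -1;
    }

    /* Hypothetical unit under test: reads everything from fd, -1 on any error. */
    static int slurp(int fd, char *out, size_t cap) {
        size_t got = 0;
        for (;;) {
            ssize_t r = do_read(fd, out + got, cap - got);
            if (r < 0) return -1;          /* the error path we want covered */
            if (r == 0) return (int)got;
            got += (size_t)r;
        }
    }

    int main(void) {
        char buf[64];
        fail_after = 0;                    /* very first read fails */
        do_read = fault_read;
        assert(slurp(0, buf, sizeof buf) == -1);
        do_read = real_read;
        return 0;
    }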
| ▲ | rvz 9 hours ago | parent | prev [-] |
| all of the above. |
|