mkozlows | 10 hours ago
I like this. "Best practices" are always contingent on the particular constellation of technology out there; with tools that make it super-easy to write code, I can absolutely see 100% coverage paying off in a way that doesn't for human-written code -- it maximizes what LLMs are good at (cranking out code) while giving them easy targets to aim for that require little judgement.

(A thing I think is under-explored is how much LLMs change where the value of tests lies. Back in the artisan hand-crafted code days, unit tests were mostly useful as scaffolding: almost all the value I got from them came while writing the code. If I'd deleted the unit tests before merging, I'd still have gotten 90% of their value. Whereas now, the AI doesn't need unit tests as scaffolding as much as I do, _but_ having them in there makes future agentic interactions safer, because they act as reified context.)
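To make the "reified context" point concrete, here's a minimal sketch (the function and test names are hypothetical, not from anyone's actual codebase) of a unit test surviving as a machine-readable spec for whoever, human or agent, touches the code next:

    # Hypothetical example: the tests pin down a behavioral contract
    # that a future agent editing apply_discount() has to keep passing.
    def apply_discount(price: float, code: str) -> float:
        """Apply a loyalty discount; unknown codes are a no-op."""
        if code == "LOYAL10":
            return round(price * 0.90, 2)
        return price

    def test_loyalty_discount_is_ten_percent():
        assert apply_discount(100.00, "LOYAL10") == 90.00

    def test_unknown_code_leaves_price_unchanged():
        assert apply_discount(100.00, "BOGUS") == 100.00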
Waterluvian | 9 hours ago
It might depend on the lifecycle of your code. The tests I have for systems that have kept evolving while staying production-critical for over a decade are invaluable; I cannot imagine touching a thing without them. Many of them reference a ticket whose fix they prove still holds: a sometimes painfully learned lesson.
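A minimal sketch of that ticket-pinned regression test pattern (the ticket ID and function are hypothetical):

    # BUG-1234 (hypothetical ticket): total crashed on an empty cart.
    def cart_total(items):
        return sum(item["price"] for item in items)

    def test_bug_1234_empty_cart_total_is_zero():
        # Proves the fix for BUG-1234 remains in place.
        assert cart_total([]) == 0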
johnnyfived | 4 hours ago
I've said this before here, but "best practices" in code really do converge, even across different implementations and architectures. Ask an LLM to write the best possible code for a scenario and your own implementation likely wouldn't differ much. Writing, art, and other creative output are nothing at all like code in this respect, which puts the software industry in a more particular spot than anything else facing automation.