jazzypants 2 hours ago
> Objectives change; timeliness matters. The speed at which you deliver value is incredibly important, which is why it matters to measure your process.

This assumes that shorter code is faster to write. To quote Blaise Pascal, "I would have written a shorter letter, but I did not have the time."

> Can you deliver value without lines of code?

No, but you can also destroy value when you stuff a codebase full of bloated, bug-ridden code that no man or machine can hope to understand.
mcmcmc 2 hours ago | parent
You seem determined to misinterpret. I'm not talking about LOC as a measure of productivity. What's being discussed is the ratio of LOC needing review to reviewer capacity (how many LOC can be read and reviewed over the same sampling period). Agentic AI/vibe coding has pushed that ratio up, revealing a bottleneck in the SDLC. It's a proxy metric, get over yourself. "All models are wrong, some are useful." What's not useful is constantly complaining that there's no way to measure your work outside the binary "is it done" every time process efficiency comes up.
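The ratio being described can be sketched in a few lines. This is only an illustration of the proxy metric as stated in the comment; the function name and all numbers are made up:

```python
# Hypothetical sketch of the review-load ratio discussed above.
# All names and figures are invented for illustration.

def review_load_ratio(loc_submitted: int,
                      loc_per_reviewer: int,
                      reviewers: int) -> float:
    """Ratio of LOC awaiting review to total review capacity
    over the same sampling period. A value above 1.0 means code
    arrives faster than reviewers can read it."""
    capacity = loc_per_reviewer * reviewers
    return loc_submitted / capacity

# e.g. agents produce 12,000 LOC/week; 4 reviewers each
# get through 1,500 LOC/week of careful review
print(review_load_ratio(12_000, 1_500, 4))  # 2.0 -> backlog grows
```

On this toy model, doubling generated LOC without adding review capacity doubles the ratio, which is the bottleneck the comment is pointing at.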