bonesss 2 days ago
Given a hypothetical 25% boost: there are categories of errors that vibe-testing vibed code will introduce, and we know humans are bad at critical reading. On the support timeline of an Enterprise product, that's going to lead to one or more real issues. At what point is an 'extra' 25% coding overhead worth it to ensure a sane human, reasonably concerned about criminal consequences for impropriety, read all the code when writing it, and every change around it? To prevent public embarrassment that can and will chase off customers? To have someone to fire and sue if need be?

[Anecdotally, the inflection point was finding tests updated to short-circuit through mildly obfuscated code (introduced after several reviews). Paired with a working system developed with TDD, that mistake only becomes obvious when the system stops working but the tests don't. I wrote it, I ran the agents, I read it, I approved it, but I was looking for code quality, not intentional sabotage/trickery… lesson learned.]

From a team-lead perspective in an Enterprise space, spending 25% more time on coding to save insane amounts of aggressive, easy-to-flub review, and whole categories of errors, sounds like a smart play. CYA up front, take the pain up front.
bluGill a day ago | parent
Not that you are wrong, but you don't seem to understand my point. I spend less than 25% of my time writing code. I also do code review, story/architecture planning, testing, bug triage, required training, and other management/people activities; these take up more than 75% of my time. Even if AI could vibe-code as well as I do, infinitely fast, it still wouldn't be a 75% improvement.