conwy · 8 hours ago
FWIW, I've been using AI, but instead of optimising for "max # of lines/commits", I'm optimising for "min # of PR comments/iterations/bugs". My goal is to end up with less, simpler code and more, bigger impact. The real goal is business value, and ultimately human value. Optimise for that, using AI where it fits. Along those lines, some techniques I've been dabbling in:

1. Getting multiple agents to implement a requirement from scratch, then combining the best ideas from all of them with my own informed approach.

2. Gathering documentation (requirements, background info, glossaries, etc.), pointing an agent at it, and asking carefully selected questions whose answers are likely to be useful.

3. Getting agents to review my code, abstracting review comments I agree with into a reusable checklist of general guidelines, then using those guidelines to inform the agents in subsequent code reviews. Over time I hope this will make the reviews increasingly well fitted to the codebase and the nature of the problems I work on.
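The checklist technique in point 3 could be sketched as below. This is a minimal sketch under my own assumptions, not the commenter's actual setup: the guidelines file, the `build_review_prompt` helper, and the prompt wording are all hypothetical, and the call to an actual agent is omitted.

```python
from pathlib import Path


def load_guidelines(path: Path) -> list[str]:
    """Read one accumulated guideline per line, skipping blanks and comments."""
    return [
        line.strip()
        for line in path.read_text().splitlines()
        if line.strip() and not line.startswith("#")
    ]


def record_guideline(path: Path, comment: str) -> None:
    """Promote a review comment you agreed with into the reusable checklist."""
    with path.open("a") as f:  # "a" creates the file if it doesn't exist yet
        f.write(comment.strip() + "\n")


def build_review_prompt(guidelines: list[str], diff: str) -> str:
    """Assemble a review prompt that front-loads the project-specific checklist."""
    checklist = "\n".join(f"- {g}" for g in guidelines)
    return (
        "Review the following diff. Apply these project-specific guidelines,\n"
        "accumulated from past reviews, before any generic advice:\n"
        f"{checklist}\n\n--- DIFF ---\n{diff}"
    )
```

The feedback loop is the point: each agreed-with comment gets appended via `record_guideline`, so later prompts carry increasingly codebase-specific instructions.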
kaashif · 3 hours ago (in reply)
The Goodhart's law effect there seems obvious: rather than the code getting better, you might just become less rigorous in your reviews and stop commenting as much. You may not even realize your standards are dropping.