regularfry 3 hours ago
So review the code. Our rule is that if your name is on the PR, you own the code: someone else will review it and expect you to be able to justify its contents. And we don't accept AI commits.

What this means in workflow terms is that the bottleneck has moved from writing the code to reviewing it. That's forward progress! But the disparity can be jarring when thousands of lines of code are generated every day and people are used to a review cycle built around tens or hundreds.

Some people argue that we can accept standards of code from AI that we wouldn't accept from a human, because it's the AI that's going to have to maintain it and make changes. I don't accept that: human or not, it's always possible to produce write-only code, and even if the position is "if we get into difficulty we'll just have the agent rewrite it", that doesn't stop you getting into the tarpit in the first place. As long as we still need to understand how the systems we produce work, we need humans who can make changes and vouch for their behaviour, and that means producing code that follows our standards.
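A "no AI commits" rule can be partially enforced mechanically. As a minimal sketch (the trailer patterns and the CI wiring are assumptions on my part, not anything regularfry described), a pre-merge check might scan the branch's commit messages for AI-attribution trailers that some coding tools append:

    #!/usr/bin/env python3
    """Hypothetical pre-merge check: reject commits carrying
    AI-attribution trailers. The patterns below are assumed,
    not an authoritative list."""
    import re
    import subprocess
    import sys

    # Trailer lines some AI coding tools add to commits (assumed).
    AI_TRAILERS = re.compile(
        r"^(Co-authored-by|Generated-by|Assisted-by):.*"
        r"(copilot|claude|chatgpt|openai|gemini|cursor)",
        re.IGNORECASE | re.MULTILINE,
    )

    def main(base: str = "origin/main") -> int:
        # Inspect every commit message between the base branch and HEAD.
        log = subprocess.run(
            ["git", "log", "--format=%H%n%B%n---", f"{base}..HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        flagged = []
        for chunk in log.split("\n---\n"):
            lines = chunk.strip().splitlines()
            if not lines:
                continue
            sha, body = lines[0], "\n".join(lines[1:])
            if AI_TRAILERS.search(body):
                flagged.append(sha[:12])
        if flagged:
            print("Commits with AI-attribution trailers:",
                  ", ".join(flagged))
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Of course this only catches tools honest enough to label their output; the human-accountability rule (your name on the PR means you own it) is what actually carries the policy.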
tcldr 2 hours ago
Totally agree. If I don't understand the code as if I'd written it myself, then I haven't reviewed it properly. And during that review I'm often trimming and moving things around to simplify and clarify as much as possible, which helps both me and the next agent. Using these tools has made me realise how much of the work we (or I) do is editing: simplifying the codebase to the clearest boundaries, narrowing the APIs of internal modules (see the sketch below), actual testing (not just unit tests), and managing emerging complexity with constant refactoring. Currently I think an LLM struggles with the subtlety and taste these tasks demand, but I'm not confident enough to say that won't change.
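The API-narrowing step is concrete enough to sketch. A minimal Python example (the module and names are hypothetical, purely for illustration) of trimming an internal module down to a deliberate public surface so callers can't couple to implementation details:

    # Hypothetical internal module after an editing pass: one
    # supported entry point, everything else private by convention.

    __all__ = ["load_report"]  # the only name callers should import

    def load_report(path: str) -> dict:
        """Parse a report file into a dict; all callers need."""
        return _parse(_read(path))

    # Helpers an agent (or a hurried human) might otherwise have
    # exported, now kept off the public surface:
    def _read(path: str) -> str:
        with open(path, encoding="utf-8") as f:
            return f.read()

    def _parse(raw: str) -> dict:
        return {"lines": raw.count("\n"), "chars": len(raw)}

The point isn't the mechanism (underscores and __all__ are just convention) but the editorial decision: a reviewer deciding which one or two names a module is actually committed to supporting.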