andrewjf 3 hours ago:
There are two kinds of reviews in my experience:

1. Does it work? Then ship it. This is great early on, for high-velocity work where the goal is to get something working in the wild. AI and AI proponents love this option. It's easy to spot obvious problems this way, but it's very unlikely to produce feedback on structural changes to abstractions and architecture that would increase _long-term_ velocity.

2. We assume this works, but is it "correct"? This is where long-term code maintainability is created. The quality and effort that go into a review like this are obviously far greater than for option 1. People working long term on a codebase love this option.

We've been biased toward #1 for a long time, but I feel like we don't have enough people capable of doing #2.
motoroco an hour ago (reply):
I've worked with some people who only seem to care about #2: they don't try the feature in any way, but they come back with comments like "this isn't tested enough" even though it has higher coverage than the codebase's average, and they refuse to approve while never meaningfully reviewing the content. In my experience it mostly seems to be theater.