bilalq, 3 hours ago
This question is surprising to me, because I consider AI code review the single most valuable aspect of AI-assisted software development today. It's ahead of line/next-edit tab completion, agentic task completion, etc.

AI code review does not replace human review. But AI reviewers will often notice little things that a human may miss. Sometimes the things they flag are false positives, but they're still worth checking. If even one logical error or edge case gets caught by an AI reviewer that would otherwise have made it to production with only human review, it's a win.

Some AI reviewers will also factor in the context of related files not visible in the diff. Humans can do this, but it's time-consuming, and many don't.

AI reviews are also a great place to put "lint"-like rules that would be complicated to express in standard linting tools like ESLint (see the sketch at the end of this comment).

We currently run 3-4 AI reviewers on our PRs. The biggest problem I run into is outdated knowledge. We've had AI reviewers leave comments based on limitations of DynamoDB or whatever that haven't been true for the last year or two. And of course it feels tedious when 3 bots all leave similar comments on the same line, but even that is useful as reinforcement of a signal.
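To make the "lint"-like rules point concrete, here's a rough sketch of the kind of rule that's trivial to state in natural language for an AI reviewer but awkward to encode in ESLint. The names here (auditLog, deleteAccount) are made up for illustration and aren't tied to any particular reviewer product:

    // Reviewer rule, stated in natural language:
    // "Any exported function that mutates user data must call auditLog before returning."

    type UserId = string;

    export function auditLog(action: string, userId: UserId): void {
      console.log(`[audit] ${action} for user ${userId}`);
    }

    // An AI reviewer with repo context can flag this function: it mutates user data
    // but never calls auditLog. Expressing the same check in ESLint would require a
    // custom plugin doing AST and call-graph analysis.
    export async function deleteAccount(userId: UserId): Promise<void> {
      // ...delete the user's records here...
      // Missing: auditLog("deleteAccount", userId);
    }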