babarock 4 days ago

You're not wrong; however, the issue is that it's not always easy to tell whether a PR includes proof that the change works. It requires that the reviewer interrupt what they're doing, switch context completely, and look at the PR.

If you consider that reviewer bandwidth is very limited in most projects AND that the volume of low-effort, AI-assisted PRs has grown enormously over the past year, we now have a spam problem.

Some of my engineers refuse to review a patch if they detect that it's AI-assisted. They're wrong, but I understand their pain.

wiml 4 days ago

I don't think we're talking about merely "AI-assisted" PRs here. We're talking about PRs where the submitter has not read the code, doesn't understand it, and can't be bothered to describe what they did and why.

As a reviewer with limited bandwidth, I really don't see why I should spend any effort on those.

atomicnumber3 4 days ago

"We're talking about PRs where the submitter has not read the code, doesn't understand it, and can't be bothered to describe what they did and why."

IME, "AI" PRs are categorically that kind of PR. I find, and others around me in my org have agreed, that if you actually do all that you describe, the actual net time savings of AI are often (for a mid-level dev or above) either net 0 or negative.

I personally have used the phrase "baptized the AI out of it" to describe my own PRs... where I may have initially used AI to generate a bunch of the code, looked at it, and gone "huh, neat, that actually looks pretty right, this is almost done." Then I generate unit tests. Then I fix the unit tests to not be shit. Then I find bugs in the AI-generated code. Then, upon pondering the code a bit, or maybe while fixing the bugs, I find the abstractions it created are clunky, so I refactor it a bit... and by the time I'm done there's not a lot of AI left in the PR, it's all me.