bjackman 3 hours ago
Shouldn't this go without saying, though? At some point someone has to review the code, and they see a human name as the sender of the PR. If that person sees the work is bad, isn't it completely unambiguous that the person whose name is on the PR is responsible for it? If someone responded "but this is AI generated", I would feel justified in replying "it doesn't matter" and passing the review back again. The rest (what's in the LLVM policy) should also fall out pretty naturally from this: if someone sends me code for review and I get the feeling they haven't read it themselves, I'll say "I'm not reviewing this, and I won't review any more of your PRs unless you promise you reviewed them yourself first."

The fact that people seem to need to establish these things as an explicit policy is a little concerning to me. (Not that it's a bad idea at all, just worrying that there was a need.)
lexicality 2 hours ago | parent
You would think it's common sense, but I've received PRs that the author didn't understand; when questioned, they told me the AI knows more about X than they do, so they trust its judgement. A terrifying number of people seem to think the damn thing is magic and infallible.