nextaccountic 13 hours ago
> The rumors we hear have to do with projects inundated with more pull requests that they can review, the pull requests are obviously low quality, and the contributors' motives are selfish.

There's a way to handle this: put an automatic AI review on every PR from new contributors. Fight fire with fire.

(Actually, this was the solution for spam even before LLMs; see "A Plan for Spam" by Paul Graham. Basically, if you have a cheap but accurate filter (specifically, one you can train on your own patterns), it should be enabled as the first line of defense. Anything the filter doesn't catch and the user has to mark as spam manually should become data to improve the filter.)

Moreover, if the review detects LLM-generated content that the user didn't disclose, maybe there should be consequences.
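The feedback loop described in the parenthetical (a cheap trainable filter as the first line of defense, with manually flagged misses fed back as training data) can be sketched roughly like this. This is a minimal toy Naive Bayes classifier in the spirit of "A Plan for Spam", not anyone's actual PR-review tooling; all names and training strings here are made up for illustration:

```python
from collections import Counter
import math

class SpamFilter:
    """Toy Naive Bayes-style filter: train on labeled examples, and
    retrain on anything the filter misses that a maintainer flags
    by hand (the feedback loop described in the comment)."""

    def __init__(self):
        self.spam = Counter()   # token counts seen in spam
        self.ham = Counter()    # token counts seen in legitimate text
        self.n_spam = 0
        self.n_ham = 0

    def train(self, text, is_spam):
        tokens = text.lower().split()
        if is_spam:
            self.spam.update(tokens)
            self.n_spam += 1
        else:
            self.ham.update(tokens)
            self.n_ham += 1

    def score(self, text):
        # Log-odds that the text is spam, with add-one smoothing so
        # unseen tokens don't zero out the probability.
        tokens = text.lower().split()
        vocab = len(set(self.spam) | set(self.ham)) or 1
        total_s = sum(self.spam.values()) + vocab
        total_h = sum(self.ham.values()) + vocab
        log_odds = math.log((self.n_spam + 1) / (self.n_ham + 1))
        for t in tokens:
            log_odds += math.log((self.spam[t] + 1) / total_s)
            log_odds -= math.log((self.ham[t] + 1) / total_h)
        return log_odds

    def is_spam(self, text, threshold=0.0):
        return self.score(text) > threshold

# Train on a few (invented) PR descriptions.
f = SpamFilter()
f.train("fix typo in docs", False)
f.train("update readme wording", False)
f.train("great project please merge my change", True)
f.train("added huge generated patch please accept", True)

# A message the filter misses and a maintainer marks as spam by
# hand becomes new training data, improving the next prediction.
f.train("free tokens claim your airdrop", True)
```

The key property is that the filter is trained on *your* project's patterns, so each manually flagged miss makes the first line of defense a little better, exactly as the comment suggests.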