▲ | puilp0502 2 days ago
Every time I encounter these kinds of policies, I can't help but wonder how they would be enforced: the people who are considerate enough to abide by them are the ones who would have "cared" about code quality and such anyway, so the policy is a moot point for them. OTOH, the people who recklessly spam "contributions" generated from LLMs are, by their very nature, very unlikely to respect these policies. For me it's like telling bullies not to bully.

By the way, I'm in no way against these kinds of policies: I've seen what happened to curl, and I think it's fully within their rights to outright ban any usage of LLMs. I'm just concerned about the enforceability of these policies.
▲ | userbinator 2 days ago
I think it's a discouragement more than an enforcement --- a "we will know if you submit AI-generated code, so don't bother trying." Maybe those who do know how to use LLMs really well can submit code that they fully understand and can explain the reasoning of, in which case the point is moot.
▲ | joecool1029 2 days ago
> I can't help but wonder how these policies would be enforced

One of the parties that decided on Gentoo's policy effectively said the same thing. If I get what you're really asking... the reality is, there's no way for them to know if an LLM tool was used internally; it's an honor system. But enforcement is just banning the contributor if they become a problem. They've banned or otherwise restricted contributors in the past for being disruptive or spamming low-quality contributions.

It's worded the way it is because most of the parties understand this isn't going away and the policy might get revisited eventually. At least one of them hardline opposes LLM contributions in any form and probably won't change their mind.
▲ | cleartext412 9 hours ago
Sometimes a PR contains objective evidence, such as LLM responses left in comments, or even something like "Generated with [Claude Code](https://claude.ai/code)" in the commit message (notable example: https://github.com/OpenCut-app/OpenCut/pull/479/commits).
▲ | h4ny 2 days ago
You just stop accepting contributions from them? There is nothing inherently different about these policies that makes them more or less difficult to enforce than other kinds of policies.
▲ | totallymike a day ago
If nothing else, it gives maintainers a sign to point to when closing PRs with prejudice, and that's not nothing. Bad-faith contributors will still likely complain when their PRs are closed, and having an obviously applicable policy to cite makes it harder for them to keep complaining without getting banned outright.
▲ | yifanl a day ago
You enforce them by pointing out the policy and closing the issue/patch request whenever you're concerned about the quality of the submission. If it turns out a submission was incorrectly called out, well, that sucks, but I submit that patches have been refused before LLMs came to be.
▲ | fuoqi 2 days ago
It's often quite easy to distinguish LLM-generated, low-effort slop, and it's far easier to point to an established policy than to explain why the PR is complete garbage. On GitHub it's even easier to detect by inspecting the author's contribution history (and if it's private, that's an automatic red flag). Of course, if someone has used an LLM during development as a helper tool and done the necessary work of properly reviewing and fixing the generated code, then it can be borderline impossible to detect, but such PRs are much less problematic.
▲ | WD-42 2 days ago
If someone uses an LLM to help them write good code that is indistinguishable from human-written code, you are right, it's not enforceable. And that's what most people who are using LLMs should be doing. Unfortunately, it is sometimes possible to tell the difference between human-written and LLM-generated code (slop). Policies like this just make it clear and easy to outright reject the latter.
▲ | CJefferson 2 days ago
We do tell bullies not to bully, and then hopefully, when they are caught, they are punished. It's not a perfect system, but it's better than just ignoring that bullying happens.
▲ | sensanaty 2 days ago
To me the point is that I want to see effort from a person asking me to review their PR. If it's obvious LLM-generated bullshit, I outright ignore it. If they put in the time and effort to mold the LLM output so that it's high quality and they actually understand what they're putting in the PR (meaning they probably replace 99% of the output), then good, that's the point.
▲ | bgwalter a day ago
You cannot prevent cheating with other policies, like the Developer Certificate of Origin, either, yet no one brought up the potential for cheating at the time those policies were discussed. Several projects have rejected "AI" policies using your argument even though those same projects have contributor agreements or similar. This inconsistency makes it likely that the cheating argument, when used only against "AI" contributions, is a pretext, and that these projects are forced to use or promote "AI" for a number of reasons.
▲ | mctt a day ago
What happened to curl?

The comment is referring to how the curl project is being overwhelmed by low-quality bug/vulnerability reports generated (or partially generated) by AI ("AI slop"), so much so that curl maintainers are now banning reporters who submit such reports and demanding disclosure, because these sloppy reports cost a lot of time and drain the team. [generated by ChatGPT]

Source: https://news.ycombinator.com/item?id=45217858