| ▲ | sothatsit a day ago |
| Concerns about wasting maintainers’ time, onboarding, or copyright are of great interest to me from a policy perspective. But I find some of the debate around the quality of AI contributions odd. Quality should always be the responsibility of the person submitting changes. Whether a person used LLMs should not be a large concern if they are acting in good faith. If they submitted bad code, having used AI is not a valid excuse. Policies restricting AI use might hurt good contributors while bad contributors ignore the restrictions. That said, restrictions for non-quality reasons, like copyright concerns, might still make sense. |
|
| ▲ | qsera a day ago | parent | next [-] |
| > If they submitted bad code... The core issue is that it takes a large amount of effort even to assess this, because LLM-generated code looks good superficially. It is said that statically typed FP languages make it hard to implement something if you don't really understand what you are implementing, while dynamically typed languages make it easier to implement something you don't fully understand. LLMs take this to another level: they enable one to implement something with zero understanding of what they are implementing. |
| |
| ▲ | sothatsit a day ago | parent [-] | | The people likely to submit low-effort contributions are also the people most likely to ignore policies restricting AI usage. The people following the policies are the most likely to use AI responsibly and not submit low-effort contributions. I’m more interested in how we might allow people to build trust, so that reviewers can spend their time productively on trusted contributions while avoiding wasting reviewers’ time on drive-by contributors. This seems like a hard problem. | | |
| ▲ | dormento a day ago | parent | next [-] | | I wonder if the right call wouldn't be to impose a LOC limit on contributions (sensibly chosen for the combination of language/framework/toolset). | | |
| ▲ | sothatsit a day ago | parent [-] | | I quite like this direction. Limit new contributors to small contributions, and then relax restrictions as more of their contributions are accepted. |
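The graduated-trust idea above could be sketched as a simple gate. This is a hypothetical policy, not anything Debian or GitHub implements: the tier numbers are made up, and `merged_prs` would have to come from the forge's API or the project's own records:

```python
def patch_size_limit(merged_prs: int) -> int:
    """Hypothetical graduated cap on changed lines per patch.

    The cap grows as a contributor accumulates accepted (merged)
    contributions. Tier thresholds here are illustrative only.
    """
    tiers = [(0, 50), (3, 200), (10, 1000)]  # (min merged PRs, LOC cap)
    cap = 0
    for threshold, limit in tiers:
        if merged_prs >= threshold:
            cap = limit  # contributor qualifies for this tier
    return cap

def patch_allowed(added: int, removed: int, merged_prs: int) -> bool:
    """Accept a patch only if its total churn fits the contributor's cap."""
    return added + removed <= patch_size_limit(merged_prs)
```

A check like this could run in CI using the diff stats of the incoming patch; the point is that the limit filters on demonstrated track record rather than on tool choice.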
| |
| ▲ | qsera 10 hours ago | parent | prev | next [-] | | I think the best place AI can help in software development is with reviews, not with doing the development itself. But AI marketing would not like to promote that, maybe because it is less dramatic and does not involve a paradigm shift or something... | |
| ▲ | mort96 20 hours ago | parent | prev [-] | | The people who write the most shitty AI code seem to be the proudest of their use of AI. |
|
|
|
| ▲ | alexey-pelykh 4 hours ago | parent | prev | next [-] |
| The distinction that matters is whether the contributor can defend their work in review, not what tool produced it. I maintain a 300-commit fork built with heavy AI assistance. The AI writes a lot of the code. I review every line and can explain every choice. The test: can they respond to feedback, explain why they chose this approach over the simpler one, iterate on edge cases? That works regardless of how the code was produced. Debian's problem isn't AI. It's distinguishing "used a tool well" from "dumped output." Code review already does this. Tighter process for new contributors (smaller patches, demonstrated understanding through review conversation) filters on engagement quality, not tool choice. |
|
| ▲ | veunes a day ago | parent | prev | next [-] |
| The real invariant is responsibility: if you submit a patch, you own it. You should understand it, be able to defend the design choices, and maintain it if needed |
| |
| ▲ | serial_dev a day ago | parent | next [-] | | Ownership and responsibility are useless when a YouTuber tells their million followers that GitHub contributions are valued by companies and shows how to create a pull request with AI in three minutes, and you get a hundred low-value noise PRs opened by university students from the other side of the globe. It’s Hacktoberfest on steroids. | |
| ▲ | tdeck 11 hours ago | parent | prev | next [-] | | "You committed it, you own it" can't even be enforced effectively at large companies, given employee turnover and changes in team priorities and reorgs. It's hard to see how this could be done effectively in open source projects. Once the code is in there, end users will rely on it. Other code will rely on it. If the original author goes radio silent, it still can't be ripped out. | |
| ▲ | pixl97 a day ago | parent | prev [-] | | Great for large patches, great way to kill very small but important patches. |
|
|
|
| ▲ | IshKebab a day ago | parent | prev [-] |
| It should be the responsibility of the person submitting changes. The problem is AI apparently makes it easy for people to shirk that responsibility. |
| |
| ▲ | sothatsit a day ago | parent | next [-] | | Trusted contributors using LLMs do not cause this problem though. It is the larger volume of low-effort contributions causing this problem, and those contributors are the most likely to ignore the policies. Therefore, policies restricting AI-use on the basis of avoiding low-quality contributions are probably hurting more than they’re helping. | | |
| ▲ | IshKebab a day ago | parent [-] | | I'm not sure I agree. If you have a blanket "you must disclose how you use AI" policy it's socially very easy to say "can you disclose how you used AI", and then if they say Claude code wrote it, you can just ignore it, guilt-free. Without that policy it feels rude to ask, and rude to ignore in case they didn't use AI. | | |
| ▲ | sothatsit 21 hours ago | parent [-] | | I’d argue this social angle is not very nuanced or effective. Not all people who used Claude Code will be submitting low-effort patches, and bad-faith actors will just lie about their AI-use. For example, someone might have done a lot of investigation to find the root cause of an issue, followed by getting Claude Code to implement the fix, which they then tested. That has a good chance of being a good contribution. I think tackling this from the trust side is likely to be a better solution. One approach would be to only allow new contributors to make small patches. Once those are accepted, then allow them to make larger contributions. That would help with the real problem, which is higher volumes of low-effort contributions overwhelming maintainers. |
|
| |
| ▲ | qsera a day ago | parent | prev [-] | | > people to shirk that responsibility. Actually not shirk it, but just transfer it to reviewers. | |
|