eschaton | 2 hours ago
If the submitter of a PR needs to take full responsibility for the code within, then the code within cannot be LLM-generated, because—depending on whether you consider it an original work by the LLM or a resurrected copy of its training data—it’s either not subject to copyright or under someone else’s copyright. (At least for any coding LLM that isn’t trained entirely on one company’s own code and also offered by that company. That sort of LLM might be able to make the regurgitation argument work for them.)

Thus any project requiring “full responsibility” from submitters may as well just ban submitters from using LLM-based tooling. That’s the tack I’ve taken for my projects, and a number of large projects have taken that stance too.

(Before someone trots out “Technical enforcement of this is impossible!”, be assured that such rules are not negated by a lack of technical enforcement; after all, there’s also no way to technically enforce that you didn’t copy someone else’s code and paste it in. By arguing that a lack of technical enforcement matters, you’re outing yourself as someone who will happily violate rules when you think you won’t get caught.)