Arifcodes 3 hours ago

The issue isn't AI, it's effort asymmetry. Before LLMs, opening a bad PR still took enough effort that most people self-filtered. Now the cost of generating a plausible-looking PR is near zero, so the noise floor has gone way up. Maintainers need better tools, not just policies. A "contributor must show they've read the contributing guide" gate (like a small quiz or a required issue link) would filter out 90% of drive-by LLM PRs. The spam problem in email was solved with a mix of technical and social solutions, not by asking people to stop spamming.
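
To make that concrete: a minimal sketch of the issue-link version of the gate, using the GitHub REST API from Python. The OWNER/REPO slug, the GH_TOKEN variable, and the comment wording are placeholders for illustration, not any real project's setup.

    import os
    import re
    import requests

    # Placeholders: substitute the real repo slug and token variable.
    API = "https://api.github.com/repos/OWNER/REPO"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GH_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    # Accept "Fixes #123"-style references or a full issue URL.
    ISSUE_REF = re.compile(r"#\d+|github\.com/OWNER/REPO/issues/\d+")

    def triage(pr_number: int) -> None:
        pr = requests.get(f"{API}/pulls/{pr_number}", headers=HEADERS).json()
        if ISSUE_REF.search(pr.get("body") or ""):
            return  # gate passed: the PR references an issue
        # Explain the rule, then close. Note that PR comments go
        # through the issues endpoint in the GitHub API.
        requests.post(
            f"{API}/issues/{pr_number}/comments",
            headers=HEADERS,
            json={"body": "Please link the issue this PR addresses "
                          "(see CONTRIBUTING.md) and reopen."},
        )
        requests.patch(f"{API}/pulls/{pr_number}",
                       headers=HEADERS, json={"state": "closed"})

Run something like that from a webhook handler or a scheduled job and drive-by PRs never reach the review queue.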

JumpCrisscross 3 hours ago

> The issue isn't AI, it's effort asymmetry

Effort asymmetry is inherent to AI's raison d'être. (One could argue that's true for most consumer-facing technology.) The problem is AI.

lelanthran 2 hours ago

> A "contributor must show they've read the contributing guide" gate (like a small quiz or a required issue link) would filter out 90% of drive-by LLM PRs.

Having a no-brown-M&Ms rule will only work temporarily. The LLM can read the guidelines too, after all.

Better might be to move to emailed PRs and ignore GitHub completely. The friction is higher, and email addresses are easier to detect and record as spammers than GitHub accounts.

nunez an hour ago

Nah; I could see any of the modern models blazing through that challenge. What might be better is an option developers can enable that disables opening new PRs via the API. That way, outside contributors can still create PRs if they're willing to spend a few seconds doing it in the browser.