himata4113 11 hours ago

I think the problem is that AI can generate 1-2k lines of junk and dress it up in a PR. But take it from someone who regularly maxes out two x20 accounts: once you get a workflow going, especially for cases where you want to satisfy a fixed set of tests, there is no going back. The days of AI hardcoding things to pass tests are going to be behind us as models gain the ability to generalize not just in their knowledge but in their problem solving, especially if you know how to hit the AI where it hurts.

Here is where it becomes relevant: upstreaming hardware support. If someone wants to add support or fix a bug, and they successfully test it and it works, there has to be some kind of middle ground where the PR is justified and worth the effort of a review, even though the person submitting it may have no idea what they're talking about and just has to trust the AI. I don't have an answer to that, except limiting the size and scope of such PRs, and possibly requiring acknowledged previous work (hand-made rather than AI-generated) to validate that you at least know what the PR is about and what it does.