| ▲ | debarshri 4 hours ago |
This weekend, I found an issue with Microsoft's new Golang version of sqlcmd. I ran Claude Code and fixed the issue, which I wouldn't have done if agent tooling did not exist. The fix was contributed back to the project. I think it is about who is contributing, their intention, and various other nuances. I would still say it is a net good for the ecosystem.
|
| ▲ | mysterydip 4 hours ago | parent | next [-] |
I think the problem is that determining who is contributing, their intentions, and those other nuances takes a human’s time and effort. And at some point the number of contributions becomes too much to sort through.

| ▲ | debarshri 4 hours ago | parent [-] |
I think building enough barriers, processes, and mechanisms might work. I don't think it needs to be human effort.

| ▲ | ThrowawayR2 3 hours ago | parent | next [-] |
If it's not human effort, it costs tokens, lots of tokens, that need to be paid for by somebody. The LLM providers will be laughing all the way to the bank, because they get paid once by the people causing the problem and again by the people putting up the "barriers, processes, and mechanisms" to control it. Even better for them, the more the two sides escalate, the more they get paid.

| ▲ | username223 3 hours ago | parent | prev [-] |
So open source development should be more like job-hunting and hiring, where humans feed AI-generated resumes into AI resume filters, which supposedly choose reasonable candidates to be considered by other humans? That sounds... not good.
|
|
|
| ▲ | kermatt 2 hours ago | parent | prev | next [-] |
If you used Claude to fix the issue, built and tested your branch, and only then submitted the PR, the process is not much different from pre-LLM days. I think the problem is where bug-bounty or reputation chasers are letting LLMs write the PRs _without_ building and testing. They seek output, not outcomes.
|
| ▲ | atomicnumber3 3 hours ago | parent | prev | next [-] |
Did you actually fix the issue, or did you fix the issue and introduce new bugs? The problem is the asymmetry of effort. You verified that you fixed your issue; the maintainers verified literally everything else (or are the ones taking the hit if they're just LGTMing it).

Sorry, I am sure your specific change was just fine. But I'm speaking generally. How many times at work have I looked at a PR and thought "this is such a bad way to fix this I could not have come up with such a comically bad way if I tried." And naturally I couldn't say this to my fine coworker, whose zeal exceeded his programming skills (partly because someone else had already approved the PR after "reviewing" it...). No, I had to simply fast-follow with my own PR, containing a squashed revert of his change along with the correct fix, so that it didn't introduce race conditions into parallel test runs.

And the submitter of course has no ability to gauge whether their PR is the obvious trivial solution or comically incorrect. Therein lies the problem.
|
| ▲ | softwaredoug 4 hours ago | parent | prev | next [-] |
That’s the positive case IMO: a human (you) remains responsible for the fix. It doesn’t matter if AI helped. The negative case is free-running OpenClaw slop cannons that could even be malicious.

| ▲ | _joel 4 hours ago | parent [-] |
I agree, but that's assuming the project accepts AI-generated code, of course, especially given the legality of accepting commits written by an AI trained on god knows what dataset.

| ▲ | debarshri 3 hours ago | parent [-] |
We have been doing this lately: when we hit a roadblock with open source, we run Claude Code to fix the OSS issue and contribute the fix back. We genuinely put effort into testing it thoroughly. We don't want to bother maintainers, so they can focus on more important issues. I think a lot of tail-end issues and bugs in OSS can be addressed this way. We leave it up to the maintainers to accept the PR or not, but our own problem is solved either way, since we test the changes thoroughly.
|
|
|
| ▲ | thrance 4 hours ago | parent | prev | next [-] |
| Genuinely interested in the PR, if you would kindly care to link it. |
|
| ▲ | krater23 4 hours ago | parent | prev [-] |
And are you sure that you fixed it without creating 20 new bugs? To the reader, it can look like you never understood the bug, so how can you be sure you've done anything right?

| ▲ | saghm 3 hours ago | parent | next [-] |
How do you make sure you don't create bugs in the code you write without an LLM? I imagine for most people, the answer is a combination of self-review and testing. You can just do those same things with code an LLM helps you write, and at that point you have the same level of confidence.

| ▲ | xigoi an hour ago | parent [-] |
It’s much harder to understand code you didn’t write than code you wrote.

| ▲ | debarshri 3 hours ago | parent | prev | next [-] |
Pretty sure I did not create bugs, because I validated the change thoroughly; I had to deploy it into production in a fintech environment. So I am quite confident in, as well as convinced of, the change. But then, I know what I know.

| ▲ | wussboy 2 hours ago | parent [-] |
This is the fundamental problem. You know what you know, but the maintainer does not, and cannot possibly take the time to find out what every single PR author knows before accepting their work. AI breaks every part of the web of trust that is foundational to knowing anything.

| ▲ | Aurornis 3 hours ago | parent | prev | next [-] |
Using an LLM as an assistant isn’t necessarily equivalent to not understanding the output. A common use case of LLMs is to quickly search codebases and pinpoint problems.

| ▲ | mycall 4 hours ago | parent | prev | next [-] |
Code complexity is often the cause of more bugs, and complexity naturally comes from more code, so this is not uncommon. As they say, the best code I ever wrote was no code.

| ▲ | silverwind 3 hours ago | parent | prev [-] |
If the test coverage is good, it will most likely be fine.
|