sothatsit 2 days ago
I was arguing against your dismissal of the value proposition of LLMs, not about open-source maintainers getting spammed by low-quality issues and PRs (where I think we agree on a lot of points). Your argument that the value proposition of LLMs makes no sense takes a really black-and-white view of modern AI. There are actually a lot of tasks where verification is easier than doing the task yourself, even in areas where you are not an expert. You just have to actually do the verification (which is the primary problem with maintainers getting spammed by people who verify nothing).

For example, I have recently been writing a proxy for work, even though I'm not that familiar with networking setups. Using LLMs, I've been able to get to a robust solution that covers our use-cases. I didn't need to be an expert in networking: my experience in other areas of computer science, combined with LLMs to help me research, let me figure out how to get our proxy to work. Maybe there is some nuance I am missing, but I can verify that the proxy correctly receives the traffic and I can figure out where it needs to go, and that's enough to make progress.

There is some academic purity lost in this process of using LLMs to extend the boundary of what you can accomplish. That has some pretty big negatives, such as allowing people with little experience to create incredibly insecure software. But I think there are far more cases where, if you verify the results you get and don't try to extend too far past your knowledge, LLMs give you great leverage to do more.

This is to say, you don't have to be an expert to use an LLM for a task, but it does help a lot to have some knowledge of related topics to ground you. LLMs can greatly expand the scope of what you can do, and that is of great value (even if they don't help you do literally everything with a high likelihood of success).
Additionally, coding agents like Claude Code are incredible at helping you get up to speed with how an existing codebase works. It is actually one of the most amazing use-cases for LLMs: they can read a huge amount of code and break it down for you so you can figure out where to start, which is a huge help when trying to contribute to someone else's repository. LLMs can also help you find where to make a change, write the patch, set up a test environment to verify it, look for project guidelines/styleguides to follow, review your patch against those guidelines, and write the git commit and PR description. There are so many areas where they can help with open-source contributions.

The main problem, in my eyes, is people who come to a project and make a PR because they want the "cred" of contributing with the least possible effort, instead of because they have an actual bug or feature they want to fix or add to the project. The former is noise, but the latter always has at least one person who benefits (i.e., you).