JohnBooty a day ago
There are a number of other issues, such as the ethical and environmental ones. However, taking this one in isolation...

I'm struggling to understand this particular angle. Humans are capable of generating extremely poor code. Improperly supervised LLMs are capable of generating extremely poor code. How is this an LLM-specific problem?

I believe part of (or perhaps all of) the argument here is that LLMs enable more unqualified contributors to generate larger quantities of low-quality code than they otherwise could. Which... is true. But I'm still not sure that LLMs are the problem here. Nobody should be submitting unexpected, large, hard-to-review quantities of code in the first place, LLM-aided or otherwise. It seems to me that LLMs are, at worst, exposing an existing flaw in the governance process of certain projects?
wodenokoto a day ago
It means that if you can't write it, they don't trust you to be able to evaluate it either. As for humans who can't write code, their code doesn't tend to look like they can.
SAI_Peregrinus a day ago
> Nobody should be submitting unexpected, large, hard-to-review quantities of code in the first place

Without LLMs, people are less likely to submit such PRs. With LLMs, they're more likely to do so. This is borne out by the recent increase in such PRs that pretty much all projects have seen. Current LLMs are extremely sycophantic & encourage people to think they're brilliant revolutionary thinkers coming up with the best <ideas, code, etc> ever. Combined with the marketing of LLMs as experts, it's pretty easy to see why some people fall for the hype & believe they're doing valuable work when they're really just dumping slop on the reviewers.