nicoburns 12 hours ago

Something I've noticed is that AI code generation makes it easier/faster to generate code while shifting more of the work of keeping code correct and maintainable to the code review stage. That can be highly problematic for open source projects, which are typically already bottlenecked by maintainer review bandwidth.

It can be mitigated by PR submitters doing a review and edit pass prior to submitting a PR. But a lot of submitters don't currently do this, and in my experience the average quality of PRs generated by AI is significantly lower than that of PRs written without it.

pgroves 11 hours ago | parent | next [-]

I was expecting this to be the point of the article when I saw the title. Popular projects appear to be drowning in PRs that are almost certainly AI-generated. OpencodeCli has 1200 open at the moment [1]. Aider, which is sort of abandoned, has 200 [2]. AFAIK, both projects are maintained mostly by one person.

[1] https://github.com/anomalyco/opencode/pulls [2] https://github.com/Aider-AI/aider/pulls

matkoniecz 9 hours ago | parent [-]

Even not-so-popular niche projects are getting LLM spam. Curiously, at least where I am active, most of it comes from accounts with India-related usernames.

Some are opening PRs, some are posting comments in issues that repeat what was said already, just in more words.

trey-jones 12 hours ago | parent | prev | next [-]

Speaking as an old guy, I would rather have an LLM doing (assisting with) the code review than the actual code production. Is that stupid?

electroly 11 hours ago | parent | next [-]

LLMs are great at reviewing. This is not stupid at all if it's what you want; you can still derive benefit from LLMs this way. I like to have them review at the design level: I write a spec document, and the LLM reviews and advises. I don't like having the LLM actually write the document, even though they are capable of it. I do like them writing the code, but I totally get it; it's no different from how I feel about the spec documents.

trey-jones 10 hours ago | parent | next [-]

Right, I'd say this is the best value I've gotten out of it so far: "I'm planning to build this thing in this way; does that seem like a good idea to you?" Sometimes I get good feedback that something else would be better.

torginus 9 hours ago | parent | prev [-]

If LLMs are great at reviewing, why do they produce the quality of code they produce?

electroly 8 hours ago | parent | next [-]

Reviewing is the easier task: it only has to point me in the right direction. It's also easy to ignore incorrect review suggestions.

gjadi 9 hours ago | parent | prev [-]

Imho it's because you worked before asking the LLM for input, thus you already have information and an opinion about what the code should look like. You can recognize good suggestions and quickly discard bad ones.

It's like reading: for better learning and understanding, it is advised that you think about and question a text before reading it, and then again after skimming it.

Whereas if you ask for the answer first, you are less prepared for the topic and it is harder to form a different opinion.

It's my perception.

hxugufjfjf 8 hours ago | parent [-]

It's also because they are only as good as the skills they've been given. If you tell them "code <advanced project> and make no x and y mistakes", they will still make those mistakes. But if you say "perform a code review and look specifically for x and y", then they may have some notion of what to do. That's my experience with using them for both writing and reviewing the same code in different passes.
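
As a rough sketch of that second approach (assuming the OpenAI Python client; the model name and the x/y checklist here are just placeholders, not part of anyone's actual setup), a targeted review pass might look something like:

    # Rough sketch of a targeted review pass, assuming the OpenAI Python client.
    # The model name and the checklist items are placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()

    # The "x and y" the review should look for, spelled out explicitly.
    checklist = [
        "off-by-one errors in loop bounds",
        "unchecked error returns around file I/O",
    ]

    # The change under review, e.g. the output of `git diff`.
    with open("patch.diff") as f:
        diff = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are a code reviewer. Look specifically for: "
                           + "; ".join(checklist),
            },
            {"role": "user", "content": diff},
        ],
    )

    print(response.choices[0].message.content)

The point is only that the prompt names the failure modes up front instead of asking the model to avoid them while writing the code.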

groundzeros2015 12 hours ago | parent | prev | next [-]

This makes sense to me.

I need to make decisions about how things are implemented. Even if it can pick “a way”, that’s not necessarily the coherent design that I want.

In contrast, for review I’ve already made the choices and it’s just providing feedback: more information that I can choose to follow or ignore.

Leynos 11 hours ago | parent | prev [-]

Take a look at CodeRabbit and Sourcery if you want to give that a go.

echelon 12 hours ago | parent | prev [-]

The maintainers can now do all the work themselves.

With the time they save using AI, they can get much more work done. So much that having other engineers learn the codebase is probably not worth it anymore.

Large scale software systems can be maintained by one or two folks now.

Edit: I'm not going to reply to everyone and get rate limited, so I'll just link another comment:

https://news.ycombinator.com/item?id=46765785

tracker1 10 hours ago | parent | next [-]

No, because proper QA/QC will be the bottleneck... AI is ill-suited to testing for fit/use. I built an ANSI terminal with AI assist (rust/wasm/canvas)... it literally took longer to get the scrollback feature working with keyboard and mousewheel interactions than it took to get the basic rendering correct. And there are still a few bugs in it.

In the end, you should not just skip QA/QC and fitness testing. Many things can fit a technical spec and still be absolutely horrible. With AI-assisted development, imo it's that much more important to get the UX right. I don't want 10x the apps if they're all half-implemented garbage that looks like garbage, is hard to use, and is just painful to install and maintain.

Library creation still has a place here... and so far, getting AI code assistants to actually understand and use a given library that may be less popular has been, at the very least, interesting.

wooderson_iv 12 hours ago | parent | prev | next [-]

Do you have anecdotes or evidence of this or is it speculative?

j16sdiz 12 hours ago | parent | prev | next [-]

Those are the most mentally exhausting tasks. Are you sure putting this burden on a single person is good?

erelong 8 hours ago | parent | prev | next [-]

Yeah, it should change things, but it should also free up energy to work on other things.

shafyy 12 hours ago | parent | prev | next [-]

Not sure if you're being sarcastic or not?

matkoniecz 9 hours ago | parent | prev [-]

> So much that having other engineers learn the codebase is probably not worth it anymore.

> Large scale software systems can be maintained by one or two folks now.

No, LLMs are not so powerful yet.