pllbnk 4 hours ago

Part of the problem is that AI can also fix AI slop. At this point I doubt whether code quality matters anymore in most non-critical software. You can ask an LLM whether the code has quality issues and have it refactor to a _better_ version. It will reason through the problem, prepare a plan, and refactor. With this "better" code you can then expect your LLM to deliver higher quality results, and that's all the quality that is needed.

Actually, at this point I feel the value in software engineering is shifting from coding to testing and quality assurance.

ezekg 4 hours ago | parent | next [-]

In my experience, an LLM "refactoring" autonomously doesn't actually improve code quality; it simply reorganizes the mess into a new mess.

missedthecue 2 hours ago | parent [-]

This is my experience with human developers too, so I'm not sure there's a meaningful difference.

bcrosby95 4 hours ago | parent | prev | next [-]

Sure, but also, AI will always find issues. It will never be even mildly satisfied with the codebase and say so.

missedthecue 2 hours ago | parent | next [-]

All the frontier models tell me when there are no issues. After implementing a feature, I will ask the model to identify issues in my implementation, list them, and support each item it identified with technical argumentation and reasoning as to why it's an issue.

If it doesn't find anything, it says "I didn't find anything."

pllbnk 4 hours ago | parent | prev [-]

Not in my experience. It's true that it will always find new issues in a new session, but it is happy to say so when the code is good.

otabdeveloper4 2 hours ago | parent | prev [-]

> AI can also fix AI slop

No it can't.

AI knows nothing about software engineering; all it can do is generate code.