| ▲ | otabdeveloper4 5 hours ago |
| It will be for those fixing AI slop software. (In fact, they might need several lifetimes.) |
|
| ▲ | incognito124 5 hours ago | parent | next [-] |
Why do people think there will be work fixing AI slop software? I see that opinion here and there on HN. The cost of codegen is next to nothing. It makes no sense to spend large sums of money having an engineer fix something that could be regenerated over and over until the gods of stochasticity come in your favour. We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative. |
| |
▲ | camdenreslink 15 minutes ago | parent | next [-] | | If the AI slop software managed to get a user base, then you can't just throw it away and completely start over. You need to modify it in a way that is seamless for your users. If all code becomes single-use, are users generating it for themselves? Do you think a dentist's office will vibe code its own scheduling software? | |
▲ | GrinningFool 4 hours ago | parent | prev [-] | | When everything is generated on demand, each exploit has to be discovered anew. No more conveniences like common libraries. This is sarcasm, but it's probably also going to get sold as a feature at some point. |
|
|
| ▲ | pllbnk 4 hours ago | parent | prev [-] |
Part of the problem is that AI can also fix AI slop. At this point I doubt whether code quality matters anymore in most non-critical software. You can ask an LLM whether the code has quality issues and have it refactor to a _better_ version. It will reason through, prepare a plan, and refactor. Now, with this "better" code, you can expect your LLM to deliver higher-quality results, and that's all the quality that is needed. Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance. |
| |
▲ | ezekg 4 hours ago | parent | next [-] | | In my experience, an LLM "refactoring" autonomously doesn't actually improve code quality; it simply reorganizes the mess into a new mess. | | |
▲ | missedthecue 2 hours ago | parent [-] | | This is my experience with human developers too, so I'm not sure there's a meaningful difference. |
| |
▲ | bcrosby95 4 hours ago | parent | prev | next [-] | | Sure, but AI will also always find issues. It will never be even mildly satisfied with the codebase and say so. | | |
▲ | missedthecue 2 hours ago | parent | next [-] | | All the frontier models tell me when there are no issues. After implementing a feature, I will ask the model to identify issues in my implementation, list them, and support each item it identified with technical argumentation and reasoning as to why it's an issue. If it doesn't find anything, it says "I didn't find anything." |
▲ | pllbnk 4 hours ago | parent | prev [-] | | Not in my experience. It's true that it will always find new issues in a new session, but it is happy to say so when the code is good. |
| |
▲ | otabdeveloper4 2 hours ago | parent | prev [-] | | > AI can also fix AI slop
No, it can't. AI knows nothing about software engineering; all it can do is generate code. |
|