iterateoften 4 days ago

Garbage in, garbage out.

The LLM is forced to eat its own output: if the output is garbage, its inputs will be garbage on future passes. How the code is structured shapes how the LLM implements new features.

aspenmartin 4 days ago | parent [-]

Why would “messy” code be garbage? Also LLMs do a great job even today at assessing what code is trying to do and/or asking you for more context. I think the article is well balanced though: it’s probably worth it for the next few months to try to help the agent out a bit with code quality and high level guidance on coding practices. But as OP says this is clearly temporary.

iterateoften 4 days ago | parent | next [-]

The definitions of what is messy or clean will change with LLMs…

But there will always be a spectrum of structures that are better for the llm to code with, and coding with less optimal patterns will have negative feedback effects as the loop goes on.

aspenmartin 4 days ago | parent [-]

I agree with you, but you can dedicate tokens to fixing the bad code that agents produce today. I don't disagree with anything you're saying. I think the practical implication is that instead of pain and Jira, we'll just have dedicated audit and refactor token budgets.

SpicyLemonZest 4 days ago | parent | prev [-]

I'm dealing with a situation right now where a critical mass of "messy" code means that nobody, human or LLM, can understand what it is trying to do or how a straightforward user-specified update should be applied to the underlying domain objects. Multiple proposed semantics have failed so far.

tracker1 4 days ago | parent [-]

On the plus side, AI is pretty good at creating (often excessive) tests around a given codebase in order to (re)implement the utility using different backends or structures. The one thing to look out for is that the agent does NOT try to change a failing test when the test is valid but the code isn't.
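A minimal sketch of that workflow: a characterization test that pins down the current behavior of a function so a reimplementation can be checked against it. The `slugify` function and its test cases here are hypothetical stand-ins, not from any real codebase:

```python
# Characterization ("golden master") test sketch, assuming a hypothetical
# legacy function `slugify` that we want to reimplement with a different
# backend. The test records CURRENT behavior; the new implementation must
# pass the same assertions unchanged -- the agent should fix the code to
# satisfy the test, never edit the test to satisfy the code.

def slugify(title):
    # Stand-in for the legacy implementation being replaced.
    return "-".join(title.lower().split())

def test_slugify_characterization():
    # Input/output pairs captured from the existing implementation.
    cases = {
        "Hello World": "hello-world",
        "  Leading and trailing  ": "leading-and-trailing",
        "Already-slugged": "already-slugged",
    }
    for raw, expected in cases.items():
        assert slugify(raw) == expected, f"{raw!r} -> {slugify(raw)!r}"

test_slugify_characterization()
```

The point of keeping the expected outputs literal (rather than computed) is exactly the caveat above: a valid failing test is the signal that the new backend diverged, so the fix belongs in the code, not the test.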