devinplatt 2 days ago:
In my experience, using LLMs to code has encouraged me to write better documentation, because I get better results when I feed the documentation to the LLM. I've also noticed failure modes in LLM coding agents when abstractions or APIs are less clear and more complex; it's actually made me consider simplifying APIs so that the LLMs can handle them better. Though I agree that in specific cases what's helpful for the model and what's helpful for humans won't overlap. Once I even added some comments to a markdown file as a note to the LLM, with some more verbose examples, that most human readers would never see.

I think one of the big problems with agents today is that if you run an agent long enough, it tends to "go off the rails", so you need to babysit it and intervene when it goes off track. In modern parlance, I guess maintaining a good codebase can be framed as part of a broader "context engineering" problem.
mcv 2 days ago (in reply):
I've also noticed that going off the rails. At the start of a session they're pretty sharp and focused, but the longer the session lasts, the more confused they get. At some point they start hallucinating bullshit they wouldn't have produced earlier in the session. It's a vital skill to recognise when that happens and to start a new session.