CPLX · 4 hours ago

I'm with you on that, but I have to say I've been doing that aggressively, and it's pretty easy for Claude Code, at least, to ignore the prompts, commands, Markdown files, README, architecture docs, etc. I feel like I spend quite a bit of time telling the thing to look at information it already has. And that's in cases where I HAVE actually created the various documents and prompts for it to use. As a specific example, it regularly just doesn't reference CLAUDE.md, and it seems pretty random as to when it decides to drop that out of context. That includes right at session start, when it should have it fresh.
Aurornis · 4 hours ago

> and it's pretty easy for Claude Code at least to ignore the prompts, commands, Markdown files, README, architecture docs, etc.

I would agree with that! I've been experimenting with having Claude rewrite those documents itself. It can take simple directives and turn them into hierarchical Markdown lists with multiple bullet points. The result is annoying and overly verbose for humans to read, but the repetition and structure seem to help the LLM. I also interrupt it and tell it to refer back to CLAUDE.md if it gets too far off track. Like I said, though, I'm not really an LLM power user, and I'd be interested to hear tips from others with more time on these tools.
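For example, a one-line directive might come back as something like this (a hypothetical CLAUDE.md fragment; the `npm test` command is just a placeholder for whatever your project uses):

```markdown
<!-- Before: a single human-written directive -->
Always run the tests before committing.

<!-- After: Claude's expanded, hierarchical version -->
- Testing
  - Before every commit:
    - Run the full test suite (`npm test`).
    - If any test fails, fix it before committing; never commit over failing tests.
  - After editing a module:
    - Run that module's tests first for fast feedback, then the full suite.
```

Redundant to a human, but the repetition seems to be the point.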
zarp · 4 hours ago

> it seems pretty random as to when it decides to drop that out of context

Overcoming this kind of nondeterministic behavior around creating, following, and modifying instructions is the biggest thing I wish I could solve in my LLM workflows. It seems like you might be able to do it with a system of Claude Code hooks, but I've struggled to find a good UX for maintaining a growing, ever-changing collection of hooks. Are there any tools or harnesses that attempt to address this and let you "force"-inject dynamic rules as context?

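For the forced-injection part specifically, Claude Code's hooks can do this: the stdout of a `UserPromptSubmit` hook is added to the model's context on every prompt, so the rules can't be silently dropped. A minimal sketch in `.claude/settings.json` (assuming the current hooks schema, and a hypothetical `.claude/rules.md` file holding the dynamic rules; check the hooks docs for your version):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "cat .claude/rules.md"
          }
        ]
      }
    ]
  }
}
```

The maintenance problem stands, though: this only moves the question to how you curate that rules file as it grows.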
kierangill · 3 hours ago

Agreed here. A key theme, which isn't terribly explicit in this post, is that your codebase is your context. I've found that when my agent flies off the rails, it's due to an underlying weakness in the construction of my program: the organization of the codebase doesn't implicitly encode the "map". Writing a prompt library helps to overcome this weakness, but I've found that the most enduring guidance comes from updating the codebase itself to be more discoverable.

candiddevmike · 3 hours ago

Because, in my experience (or conspiracy theory), the model providers are trying to make the models function better without needing these kinds of workarounds. So there's a disconnect: people keep adding more explicit instructions, while the models are effectively being trained to ignore them under the guise of relying on their innate intuition, better learning, or mixture of experts.