David-Brug-Ai · 2 hours ago
The context degradation problem gets much worse when you have multiple agents or models touching the same project. One agent compacts, loses what it knew, and now the human is the only source of truth for what actually happened vs what was reported done. If that human isn't a coder, they can't verify by reading the source either.

I've been working on this and landed on a pattern I call a "mechanical ledger": a structured state file that sits outside any context window and gets updated as a side effect of work, not as a step anyone remembers to do. Every commit writes to it, every failed patch writes to it, every test run writes to it. When a session starts (or an agent compacts), it reads the ledger and rebuilds context from ground truth instead of from memory.

It's not a novel idea really; it's basically what ops teams do with runbooks and state files, but applied to the AI agent handoff problem. The interesting bit is making the updates mechanical so no agent can forget to do it.
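To make the idea concrete, here's a minimal sketch of what such a ledger could look like: an append-only JSONL file written by hooks (a git post-commit hook, the test runner, the patch applier) and replayed at session start. All names here (`ledger.jsonl`, the event types, the field names) are my own illustrative choices, not from the comment above.

```python
import json
import time
from pathlib import Path

# Hypothetical path for the append-only event log; in practice this would
# live in the repo and be written by commit hooks and CI, not by the agent.
LEDGER = Path("ledger.jsonl")

def record(event_type, **details):
    """Append one event as a side effect of work (commit hook, test runner, etc.)."""
    entry = {"ts": time.time(), "type": event_type, **details}
    with LEDGER.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def rebuild_context():
    """At session start (or after a compaction), replay the ledger into a state summary."""
    state = {"commits": [], "failed_patches": [], "test_runs": []}
    if not LEDGER.exists():
        return state
    for line in LEDGER.read_text().splitlines():
        e = json.loads(line)
        if e["type"] == "commit":
            state["commits"].append(e.get("message"))
        elif e["type"] == "patch_failed":
            state["failed_patches"].append(e.get("reason"))
        elif e["type"] == "test_run":
            state["test_runs"].append(e.get("passed"))
    return state
```

The key design choice is that `record` is called by tooling (e.g. a `post-commit` hook running `record("commit", message=...)`), so no agent has to remember to update it, and `rebuild_context` reads only ground truth that actually happened.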