EnPissant 3 days ago

> 4. persisting context across compactions

> LLMs forget things as their context grows. When a conversation gets long, the context window fills up, and Claude Code starts compacting older messages. To prevent the agent from forgetting the skill’s instructions during a long thread, Claude Code registers the invoked skill in a dedicated session state.

> When the conversation history undergoes compaction, Claude Code references this registry and explicitly re-injects the skill’s instructions: you never lose the skill guardrails to context bloat.
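The quoted mechanism (a registry of invoked skills whose instructions get re-injected after compaction) might look roughly like this sketch. All names here are hypothetical illustrations, not Claude Code's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    instructions: str

@dataclass
class Session:
    messages: list[str] = field(default_factory=list)
    # The "registry": every skill invoked during this session.
    invoked_skills: dict[str, Skill] = field(default_factory=dict)

    def invoke_skill(self, skill: Skill) -> None:
        # Record the skill so it survives later compactions,
        # and inject its instructions into the context.
        self.invoked_skills[skill.name] = skill
        self.messages.append(skill.instructions)

    def compact(self, keep_last: int) -> None:
        # Summarize/drop older messages (stubbed as a placeholder line),
        # then explicitly re-inject every registered skill's instructions.
        summary = f"[summary of {len(self.messages) - keep_last} older messages]"
        self.messages = [summary] + self.messages[-keep_last:]
        for skill in self.invoked_skills.values():
            if skill.instructions not in self.messages:
                self.messages.append(skill.instructions)
```

Note that under this sketch `invoked_skills` only ever grows, which is exactly the concern raised below: every skill ever invoked keeps being re-injected into the compacted context.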

If true, this means that over time a session can grow to contain all or most skills, negating the benefit of progressive disclosure. I would expect it would be better to let compaction do its thing with the possibility of an agent re-fetching a skill if needed.

I don't trust the article though. It looks like someone just pointed an LLM at the codebase and asked it to write an article.

KaseKun 3 days ago | parent [-]

Author here,

> It looks like someone just pointed a LLM at the codebase and asked it to write an article.

Not entirely true. I pointed an LLM at the codebase to get me to the right files for understanding skills, and to map out the dependencies and lifecycles. Then I spent quite a bit of time reading the code myself and writing about it.

An AI review at the end of the writing (to "sharpen" the language) unfortunately brought in a couple of AI fingerprints (note the "mic drop" comment above).

edit: write -> right (it's 8am)