ramoz 8 hours ago
I struggle with these abstractions over context windows, especially when Anthropic is actively focused on improving things like compaction, and the eventual goal is for the models to have real memory layers baked in. Until then we have to optimize for how agents work best, and ephemeral context is part of that (they weren't RL'd/trained with memory abstractions, so we shouldn't use them at inference either). Constant, task-specific rediscovery has worked well for me; it doesn't suffer from context decay, though it does eat more tokens.

Otherwise, the ability to search back through history is valuable: a simple git log/diff or (rip)grep/jq combo over the session directory. A simple example of mine: https://github.com/backnotprop/rg_history
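The grep-over-session-history idea above can be sketched in a few lines of shell. This is a hypothetical illustration, not the linked rg_history tool: the session directory path, the JSONL layout, and the `role`/`content` field names are all assumptions, and portable `grep` stands in for the ripgrep/jq combo so it runs anywhere.

```shell
#!/bin/sh
# Assumed layout: one JSONL file per agent session, each line an
# event object with "role" and "content" fields (hypothetical schema).
SESSIONS=/tmp/demo_sessions
mkdir -p "$SESSIONS"
cat > "$SESSIONS/session-001.jsonl" <<'EOF'
{"role":"user","content":"refactor the auth module"}
{"role":"assistant","content":"renamed login() to authenticate()"}
EOF

# Which past sessions touched this symbol?
# (With ripgrep this would be: rg -l 'authenticate' "$SESSIONS")
grep -rl 'authenticate' "$SESSIONS"

# Pull the matching event lines themselves; with jq you could then
# project fields, e.g. `... | jq -r .content`.
grep -h 'authenticate' "$SESSIONS"/*.jsonl
```

The point is that plain files plus standard search tools give the agent (or you) recall over past sessions without any memory abstraction baked into the model loop.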
AndyNemmity 8 hours ago | parent
There is certainly a risk that anything you build is an abstraction that's no longer required in a month, or three. I feel that way too, and I have a lot of these things. But in my actual experience it doesn't happen that often. People as a whole are slow to understand what these capabilities mean, so you get quite a bit of mileage out of your own improved, customized system in the meantime.