jtrn a day ago

Here is my amateur understanding of the architecture: fine-tune on the fly by using degrees of surprise to update a separate memory network that sits alongside the base model, and query that network at every token step.

So if we are viewing this through the needle-in-a-haystack lens: the needle was very surprising for the base model, so going forward, when it sees anything of the same nature, the memory module will not just give you hay, but the needle, because it made a special note of the needle when it went through the haystack a million tokens ago, precisely because it was surprising.

The Transformer's normal attention mechanism is already secretly trying to be a long-term memory system. Every time it writes a new KV pair into the cache, it’s desperately trying to “remember” that token forever.

But it’s doing it in the dumbest possible way: by hoarding an ever-growing pile of raw vectors, then frantically dot-product searching through the pile every single step. It’s like a hoarder who never throws anything away and has to rummage through mountains of junk to find the one receipt they need. Of course it chokes at long contexts.
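To make the hoarder picture concrete, here is a toy sketch of that loop in PyTorch: every decoding step appends one more K/V pair and then dot-product searches the entire pile again. The names and sizes (attention_step, d=64, a single head, no batching) are made up for illustration, not anyone's real implementation.

    import torch
    import torch.nn.functional as F

    d = 64                      # hypothetical head dimension
    K_cache, V_cache = [], []   # the ever-growing "hoard" of raw vectors

    def attention_step(q, k, v):
        """One decoding step: stash the new KV pair, then scan everything."""
        K_cache.append(k)
        V_cache.append(v)
        K = torch.stack(K_cache)            # (t, d) -- grows with every token
        V = torch.stack(V_cache)            # (t, d)
        scores = (K @ q) / d ** 0.5         # dot-product search over the whole pile
        weights = F.softmax(scores, dim=0)
        return weights @ V                  # weighted sum over everything ever stored

    # A million tokens in, the cache holds a million vectors per layer per head,
    # and every new token still rummages through all of them.
    for _ in range(5):
        out = attention_step(torch.randn(d), torch.randn(d), torch.randn(d))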

Titans/MIRAS looks at that mess and says: “Why store memory in a growing garbage pile of vectors? Store it in the weights of a deep neural network instead — and let that network keep training itself in real time, but only on the stuff that actually surprises it.” That’s literally it.
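Roughly what "memory in the weights" means, as a sketch: a small MLP acts as an associative memory mapping keys to values, so storage is a fixed bag of parameters instead of a pile that grows with context length. MemoryMLP, the hidden size, and the activation below are my assumptions, not the actual Titans configuration.

    import torch
    import torch.nn as nn

    d = 64

    class MemoryMLP(nn.Module):
        """Fixed-size memory: 'remembering' means nudging these weights."""
        def __init__(self, d, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d, hidden),
                nn.SiLU(),
                nn.Linear(hidden, d),
            )

        def forward(self, key):
            # Read: ask the memory what value it associates with this key.
            return self.net(key)

    memory = MemoryMLP(d)
    # Parameter count is constant -- it does not grow with context length,
    # unlike the KV cache in the previous sketch.
    print(sum(p.numel() for p in memory.parameters()))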

Using the Tim Cook Martian example: The model is cruising through boring financial numbers → attention is doing its normal thing, KV cache is growing, but nothing is really sticking.

Suddenly: “Tim Cook is a Martian.”

Normal attention would just add one more KV pair to the pile and pray it doesn’t get drowned out later.

Titans instead goes: “Holy shit, reconstruction error off the charts → this does NOT fit my current memory at all → massive gradient → actually rewrite huge chunks of the memory MLP’s weights right now so this fact is burned in forever.”
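A toy version of that update, reusing the MemoryMLP sketch from above (memory, d): reconstruction error is the surprise signal, and the weight update is scaled by it, so hay barely moves the memory while a needle rewrites it hard. Scaling a plain gradient step by the loss (memory_update, base_lr) is my simplification; as I understand it the actual Titans/MIRAS rule also involves momentum and learned gates.

    import torch

    def memory_update(memory, key, value, base_lr=0.1):
        pred = memory(key)
        surprise = torch.mean((pred - value) ** 2)   # how badly memory expected this
        memory.zero_grad()
        surprise.backward()                          # big surprise -> big gradients
        with torch.no_grad():
            for p in memory.parameters():
                # Step size scaled by surprise: hay ~ no-op, needle ~ big rewrite.
                p -= base_lr * surprise.item() * p.grad
        return surprise.item()

    # Boring filler barely nudges the weights; a key/value pair the memory
    # totally failed to predict ("Tim Cook is a Martian") triggers a big update.
    s = memory_update(memory, torch.randn(d), torch.randn(d))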

From that moment on, the memory MLP has physically changed its internal wiring. Any future query that even vaguely smells like “Tim Cook” or “Martian” will make the activations explode through the newly rewired paths and spit out a vector screaming “MARTIAN” at the frozen attention layers.

The frozen attention (which is still doing its normal job on the short window) suddenly sees this one extra “virtual token” in its context that is confidently yelling the surprising fact → it attends hard to it → the model answers as if the Martian revelation happened one token ago, even if it was 2 million tokens back.
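And the read path, in the same toy setup: query the (now rewired) memory with the current token's query and hand whatever comes back to the frozen attention as one extra "virtual" entry next to the short-window KV cache. The concatenation scheme here (attend_with_memory, using the retrieved vector as both key and value) is my guess at the spirit of it, not the exact Titans wiring.

    import torch
    import torch.nn.functional as F

    def attend_with_memory(q, K_window, V_window, memory):
        retrieved = memory(q).detach()                      # the vector "screaming MARTIAN"
        K = torch.cat([K_window, retrieved.unsqueeze(0)])   # short window + 1 virtual token
        V = torch.cat([V_window, retrieved.unsqueeze(0)])
        scores = (K @ q) / K.shape[-1] ** 0.5
        return F.softmax(scores, dim=0) @ V

    # Even if the surprising fact was 2M tokens ago, it rides back in here as a
    # fresh entry the frozen attention can attend to right now.
    out = attend_with_memory(torch.randn(d), torch.randn(8, d), torch.randn(8, d), memory)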

It looks exactly like a super-attention mechanism that only "primes" or "locks in" the surprising needles and deliberately forgets or ignores the hay. And it is also a way to fine-tune on the fly, permanently, for the current context.
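For completeness, the "forgets the hay" half, in the same sketch: a small decay that pulls the memory weights back toward zero every step, so faint traces fade while facts burned in by big surprise-scaled updates hang around much longer. The real Titans forgetting gate is learned and data-dependent as far as I can tell, so treat the constant alpha here as a cartoon.

    import torch

    def decay_memory(memory, alpha=0.001):
        with torch.no_grad():
            for p in memory.parameters():
                p.mul_(1.0 - alpha)   # weak traces fade a little every step

    decay_memory(memory)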

I think…