The Continual Learning Problem(jessylin.com)
99 points by Bogdanp 12 days ago | 8 comments
esafak 3 days ago

Great writeup. Are there any libraries that implement some of the methods described?

gdiamos 2 days ago

ScalarLM uses Tokenformer adapters by default, which have learnable keys and values.

https://www.scalarlm.com/blog/tokenformer-a-scalable-transfo...
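The basic shape of a learnable key/value attention layer can be sketched as follows. This is a simplification, not ScalarLM's actual implementation: Tokenformer replaces the softmax with a GeLU-based normalization, and all names and sizes here are illustrative.

```python
import numpy as np

def pattention(x, K, V):
    """Attend over a pool of learnable parameter tokens.

    x: (seq, d) input activations
    K: (n_tokens, d) learnable keys   -- trained via backprop
    V: (n_tokens, d) learnable values -- trained via backprop
    Plain softmax is used here for clarity; Tokenformer itself
    uses a modified normalization.
    """
    scores = x @ K.T / np.sqrt(x.shape[-1])       # (seq, n_tokens)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)     # row-wise softmax
    return weights @ V                            # (seq, d)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
K = rng.normal(size=(16, 8))   # growing n_tokens adds capacity
V = rng.normal(size=(16, 8))   # without touching the rest of the net
out = pattention(x, K, V)
print(out.shape)  # (4, 8)
```

The appeal for continual learning is that capacity can be added by appending new key/value tokens to the pool rather than resizing the rest of the network.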

skeptrune 3 days ago

I appreciate that people are going beyond RAG and few-shot prompting.

optimalsolver 3 days ago

Rather than handcrafting solutions like it’s 1993, why not make robustness against forgetting part of the training objective?

Let the search algorithm figure it out.

intalentive 3 days ago

Funny you should say that: this write-up recalled Stephen Grossberg's Adaptive Resonance Theory for me. The same basic ideas come up when addressing the stability-plasticity dilemma.

That said, the authors are saving this for future work. Fine-tuning is cheaper, easier, faster to validate.

>Switching to a new architecture at pretraining time has a high cost, but there are reasons we might want this (besides the better scaling behavior). The main benefit is that the model can learn to organize its memory from scratch, and once we’ve already “allocated” this high-capacity memory pool, there’s a clearer path to learning on multiple tasks and corpora over time.

This means you could "fine-tune" the model on your custom corpus at ingestion time, without having to actually train via backprop. Your corpus would be compressed into model-readable memory that updates model behavior. Then different memory units could be swapped in and out, like programs on a floppy disk. I can see this concept being especially useful for robotics.
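The "programs on a floppy disk" idea can be sketched concretely: a frozen base network plus a swappable bank of key/value memory that the model reads from. Everything below is a toy illustration under my own naming, not anything from the post.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

# Frozen base weights, shared across all tasks.
W_base = rng.normal(size=(d, d))

# Each "memory unit" is a bank of key/value vectors the model reads
# from. The names and contents are illustrative placeholders.
memory_a = {"K": rng.normal(size=(32, d)), "V": rng.normal(size=(32, d))}
memory_b = {"K": rng.normal(size=(32, d)), "V": rng.normal(size=(32, d))}

def forward(x, memory):
    h = np.tanh(x @ W_base)                        # frozen computation
    scores = h @ memory["K"].T / np.sqrt(d)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                  # softmax over slots
    return h + w @ memory["V"]                     # memory read, residual

x = rng.normal(size=(4, d))
y_a = forward(x, memory_a)   # swap memories like floppy disks:
y_b = forward(x, memory_b)   # same base weights, different behavior
print(y_a.shape)  # (4, 8)
```

Swapping `memory_a` for `memory_b` changes the model's behavior without touching `W_base`, which is the swappable-corpus intuition.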

yorwba 3 days ago

The memory is model-readable but not model-writable, so you still need to train via backprop to get the memory to store useful data.

vessenes 3 days ago

The reason you're getting slightly downvoted, I think, is that you need to answer this question first: which of the 15T tokens are you going to evaluate for forgetting? And please explain how doing that is different from another full-epoch-style pass over the weights.

Some of the appeal here is that this (handcrafted) architecture allows ongoing gradient-descent learning on a much smaller set of weights.
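That "small set of weights" point can be made concrete with a toy example: keep the pretrained weights frozen and run plain gradient descent on a small adapter only. The setup below is entirely illustrative (a linear model, made-up sizes), not anyone's actual training recipe.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8

W_base = rng.normal(size=(d, d))   # "pretrained" weights, frozen
m = np.zeros((d, d))               # small memory/adapter weights

X = rng.normal(size=(64, d))
# A new task: the target mapping is the base mapping, slightly shifted.
Y = X @ (W_base + 0.1 * rng.normal(size=(d, d)))

# Gradient descent on the adapter only; W_base is never updated,
# so whatever it encodes cannot be forgotten.
lr = 0.05
for _ in range(500):
    err = X @ (W_base + m) - Y     # (64, d) residual
    grad = X.T @ err / len(X)      # gradient w.r.t. m only
    m -= lr * grad

mse = np.mean((X @ (W_base + m) - Y) ** 2)
print(mse)  # close to zero
```

The adapter absorbs the new task while the frozen weights are untouched, which is why only the small weight set needs evaluating for drift.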

imtringued 2 days ago

Elastic weight consolidation is already a thing, and it's not enough.
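For readers unfamiliar with it: elastic weight consolidation (EWC) adds a quadratic penalty that anchors parameters in proportion to their estimated (diagonal) Fisher information on earlier tasks. A minimal sketch of the loss term, with made-up numbers:

```python
import numpy as np

def ewc_loss(task_loss, theta, theta_star, fisher, lam=100.0):
    """EWC: penalize moving parameters that were important
    (high Fisher information) on the previous task.

    theta      : current parameters
    theta_star : parameters after finishing the previous task
    fisher     : diagonal Fisher information estimate, same shape
    lam        : penalty strength (hyperparameter)
    """
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)
    return task_loss + penalty

theta_star = np.array([1.0, -2.0, 0.5])
fisher = np.array([10.0, 0.01, 5.0])   # dims 0 and 2 mattered for task A
theta = np.array([1.1, 3.0, 0.5])      # moved far along the "cheap" dim

print(ewc_loss(0.0, theta, theta_star, fisher))  # 17.5
```

The penalty stays small when updates flow through low-Fisher directions, which is exactly the regime where EWC's diagonal approximation (and hence the method itself) tends to fall short on long task sequences.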