achierius 19 hours ago

This is an important distinction, though. LLMs don't have any persistent 'state': they have their activations and their context, and that's it. They only know what was pre-trained in and what's currently in their context window. Their ability to do in-context learning is impressive, but you're fundamentally still stuck with the drift and, eventually, the forgetting that characterize these models -- whereas a human, though less quick on the uptake, will nevertheless 'bake in' the lessons in a way that LLMs currently cannot.

In some ways this is even more impressive -- with every prompt you make, your LLM is in effect re-reading (and re-comprehending) your whole codebase, from scratch!
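
A rough sketch of what that statelessness looks like in practice, assuming an OpenAI-style chat-completions client (the model name and file path here are just illustrative): the model retains nothing between calls, so the caller has to re-send the entire conversation -- and any code it should "know" -- on every single request.

    # Illustrative only: shows that all "memory" lives client-side.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system",
                "content": open("codebase_summary.txt").read()}]  # full context, every time

    def ask(question: str) -> str:
        # The model keeps no state between calls: the whole prior
        # conversation is re-sent and re-processed from scratch here.
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o", messages=history
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

If you drop the history list, the model has genuinely never "seen" your earlier messages -- there is no server-side learning being accumulated, only whatever fits in the window you send.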