keeda | 17 hours ago
LLMs may not learn on the fly (yet), but these days they do have some sort of memory that they automatically bring into their context. It's probably just a summary loaded into the context window, but I've had dozens of conversations with ChatGPT over the years, and it remembers my past discussions, interests, and preferences. Many times it has connected dots across conversations months apart to intuit what I had in mind and proactively steer the discussion where I wanted it to go.

Worst case, if they don't do this automatically, you can simply "teach" them by updating the prompt to watch for a specific mistake (similar to how we often add a test when we catch a bug). But it need not even be that cumbersome: even weaker models do surprisingly well with broad guidelines. Case in point: https://news.ycombinator.com/item?id=42150769
yahoozoo | 3 hours ago | parent
Yeah, the memory feature is just a summary of past conversations added to the system prompt.
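To make the idea concrete, here is a minimal sketch of that mechanism: stored summaries ("memories") prepended to the system prompt, plus the "teach it by updating the prompt" trick of appending a guideline when you catch a mistake. All names (`build_system_prompt`, `MEMORY`, the prompt wording) are illustrative assumptions, not OpenAI's actual implementation or API.

```python
# Illustrative sketch only -- not OpenAI's actual memory implementation.
# "Memory" here is just a list of summaries distilled from past chats,
# injected into the system prompt on every new conversation.

BASE_PROMPT = "You are a helpful assistant."

# Hypothetical summaries saved from earlier conversations.
MEMORY = [
    "User prefers concise answers with code examples.",
    "User is building an LLM-based side project.",
]

def build_system_prompt(base: str, memories: list[str]) -> str:
    """Prepend stored conversation summaries to the base system prompt."""
    if not memories:
        return base
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"{base}\n\nThings you remember about this user:\n{memory_block}"

# "Teaching" the model: when you catch a recurring mistake, append a
# guideline to memory, much like adding a regression test for a bug.
MEMORY.append("Never use deprecated APIs in code examples.")

print(build_system_prompt(BASE_PROMPT, MEMORY))
```

The point of the sketch is that no fine-tuning is involved: the model's weights never change, and all the "learning" lives in plain text that gets re-sent with each request.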