timr 4 days ago
Humans have a limited short-term memory. Humans do not literally forget everything they've ever learned after each Q&A cycle. (Though now that I think of it, I might start interrupting people with “SUMMARIZING CONVERSATION HISTORY!” whenever they begin to bore me. Then I can change the subject.)
ivan_gammel 4 days ago
LLMs do not "forget" everything completely either. By now, probably all major tools consume information from some form of memory (system prompt, Claude.md, project files, etc.) before processing your prompt. Claude Code rewrites Claude.md, ChatGPT may update its chat memory when it finds that necessary, and so on.
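A minimal sketch of that pattern: a tool that re-reads its memory files and prepends them to every request. The helper name, file list, and system prompt here are invented for illustration; only the general shape matches what such tools do.

    from pathlib import Path

    # Invented example list; real tools have their own conventions.
    MEMORY_FILES = ["Claude.md", "project_notes.md"]

    def assemble_context(user_prompt: str) -> list[dict]:
        # Re-read the persistent "memory" before every prompt, so the
        # model never starts from a truly blank slate.
        messages = [{"role": "system", "content": "You are a coding assistant."}]
        for name in MEMORY_FILES:
            path = Path(name)
            if path.exists():
                messages.append({
                    "role": "system",
                    "content": f"Memory file {name}:\n{path.read_text()}",
                })
        messages.append({"role": "user", "content": user_prompt})
        return messages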
faangguyindia 3 days ago
the "context" is the short term memory equivalent of LLM. Long term memory is its training data. | |||||||||||||||||
BeetleB 4 days ago
Both true and irrelevant. I have yet to see the "forgets everything" behavior be a limiting factor. In fact, when using Aider, I aggressively make it forget everything several times per session. To me, that's a feature, not a drawback. I've certainly had coworkers I've had to tell: "Look, will you forget about X? That use case, while it looks similar, is actually quite different in its assumptions. Stop invoking your experience there!"
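For reference, Aider exposes this as the /clear in-chat command, which wipes the conversation history. A generic sketch of the deliberate-forgetting pattern, where the Session class and the model call are placeholders:

    class Session:
        """Placeholder chat session; only the history handling is the point."""

        def __init__(self) -> None:
            self.history: list[dict] = []

        def ask(self, prompt: str) -> None:
            self.history.append({"role": "user", "content": prompt})
            # ... here the full history would be sent to the model ...

        def clear(self) -> None:
            # Wipe short-term memory on purpose; the long-term "memory"
            # (the model's weights) is untouched.
            self.history.clear()

    s = Session()
    s.ask("Refactor module A")
    s.clear()  # start fresh before an unrelated task
    s.ask("Now fix the flaky test in module B")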