AnimalMuppet 6 hours ago

I was going to say that an LLM can't do this, because it loses everything at the end of the session. But... could an LLM write out its "state" or "understanding" so that you could recover that for the next session? Do any LLMs currently have that ability?

jazzypants 6 hours ago | parent | next

It's very common, but (like most things with LLMs) it's not as deterministic as you might imagine. A common technique for agents is to have them create a "handoff" document (usually markdown) that summarizes the previous session: goals, important files/links, and so on. There are dozens of proprietary ways of doing this, and Claude Code automates the process with its /compact command, and even auto-compacts as you reach your context limit. ChatGPT has been doing auto-compaction since the beginning, as it started out with a comically small context window.
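
Here's roughly what the handoff pattern looks like as a Python sketch. Everything in it is a stand-in: the complete() stub, the prompt, and the filename are assumptions for whatever LLM client and conventions you actually use, not Claude Code's real /compact implementation.

    from pathlib import Path

    def complete(prompt: str) -> str:
        # Stand-in for a real LLM call (OpenAI, Anthropic, etc.).
        raise NotImplementedError("plug in your LLM client here")

    HANDOFF_PROMPT = (
        "Summarize this session as a handoff document for a future "
        "session. Include: goals, decisions made, important files/links, "
        "and open questions. Output markdown."
    )

    def write_handoff(transcript: str, path: str = "HANDOFF.md") -> str:
        # End of session: persist the model's "understanding" to disk.
        summary = complete(HANDOFF_PROMPT + "\n\n---\n" + transcript)
        Path(path).write_text(summary)
        return summary

    def seed_next_session(path: str = "HANDOFF.md") -> list[dict]:
        # Start of the next session: load the handoff back into context.
        handoff = Path(path)
        content = handoff.read_text() if handoff.exists() else ""
        return [{"role": "system",
                 "content": "Previous session handoff:\n" + content}]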

bathtub365 3 hours ago | parent

The problem with auto-compaction is that you aren't given the opportunity to review its compacted understanding, so you can't confirm that it's correct or catch large omissions. I try to avoid letting it compact whenever possible and stick to plans that I review, because it seems to get extremely dumb after an auto-compaction.
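
A review gate like this is easy to sketch. Everything below is an assumption: the ~4-characters-per-token heuristic, the threshold, and the complete() stub stand in for a real tokenizer and client.

    def complete(prompt: str) -> str:
        # Stand-in for a real LLM call, as in the earlier sketch.
        raise NotImplementedError("plug in your LLM client here")

    def approx_tokens(messages: list[dict]) -> int:
        # Crude heuristic: roughly 4 characters per token for English text.
        return sum(len(m["content"]) for m in messages) // 4

    def maybe_compact(messages: list[dict], limit: int = 150_000) -> list[dict]:
        if approx_tokens(messages) < limit:
            return messages
        transcript = "\n".join(m["content"] for m in messages)
        summary = complete("Summarize this session; flag anything you "
                           "are unsure about:\n" + transcript)
        # The key difference from auto-compaction: a human sees the
        # summary before it replaces the full history.
        print(summary)
        if input("Replace history with this summary? [y/N] ").lower() != "y":
            return messages
        return [{"role": "system", "content": "Session summary:\n" + summary}]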

jazzypants 3 hours ago | parent

Yeah, I still find Opus to be pretty unreliable once you get past around 150K tokens, so at that point I usually run a custom handoff command that extracts specific elements into specialized documents. The command also produces a "Documentation Map" with single-line summaries of each of those documents to help the agent sort everything out. Like most memory systems, it works pretty well around 80% of the time. I messed around with RAG and other complex solutions, and I never got much better results than my KISS system.
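
To make the "Documentation Map" idea concrete, here's a sketch; the file names and one-line summaries are invented examples, not what my actual command emits.

    # One line per specialized document, so a fresh agent knows where
    # to look. Names and summaries here are invented.
    DOC_MAP = {
        "ARCHITECTURE.md": "High-level components and how they interact.",
        "DECISIONS.md": "Choices made so far and the reasoning behind them.",
        "OPEN_TASKS.md": "What's unfinished, in priority order.",
    }

    def documentation_map() -> str:
        lines = ["# Documentation Map"]
        lines += [f"- {name}: {summary}" for name, summary in DOC_MAP.items()]
        return "\n".join(lines)

    print(documentation_map())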

jinwoo68 2 hours ago | parent | prev | next

This brings up a philosophical question. Are we willing to hand over the role of "theory building" to LLMs, if that's even possible? If so, what will be the role of human beings?

It may destroy many foundational assumptions that humans have had for thousands of years.

jhartikainen 3 hours ago | parent | prev

In theory, maybe, in some sense. But if we read Naur's definition of "theory" in a stricter, more philosophical way, it can't in full. An LLM can't build a theory because it doesn't have "real" experience; it's essentially just following rules. It also can't really argue for or justify its choices the way a person can.

This is discussed in the "Ryle's Notion of Theory" section of the original essay.