apsurd 6 hours ago

From your link: > Closing that gap, building systems that capture and encode process knowledge rather than just decision records, is the highest-value problem in enterprise AI right now.

I buy this. What exactly is the export artifact that encodes this built-up context? Is it the entire LLM conversation log? My casual understanding of MCP is that it's service/agent-to-agent "just in time" context, which is different from "world model" context. Is that right?

I'm curious whether there's an entirely new format for this data evolving, or whether it's as blunt as exporting the entire conversation log, or embeddings of the log, across AIs.

7777777phil 6 hours ago | parent [-]

The MCP point is right, though tbh MCP is more like plumbing than memory: it supplies execution-time context for tools and resources. The world model is a different thing entirely; it needs to persist across sessions, accumulate, and actually be queryable.

In practice it's mostly RAG over structured artifacts: process docs, decision logs, annotated code, and so on. Conversation history works better than you'd expect as a starting point, but it gets noisy fast, and I haven't seen a clean pruning strategy anywhere...
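Roughly the shape I mean, as a toy sketch. The artifact types and the bag-of-words "embedding" here are stand-ins I made up for illustration; a real pipeline would use an actual embedding model and a vector store:

```python
from collections import Counter
import math

# Toy "world model": structured artifacts, not raw chat logs.
# Artifact types are hypothetical examples, not a standard schema.
ARTIFACTS = [
    {"type": "decision_log", "text": "We chose Postgres over Mongo for transactional integrity."},
    {"type": "process_doc", "text": "Deploys go through staging, then a canary, then full rollout."},
    {"type": "annotated_code", "text": "The retry wrapper exists because the billing API drops connections."},
]

def embed(text):
    # Stand-in embedding: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank artifacts by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(ARTIFACTS, key=lambda art: cosine(q, embed(art["text"])), reverse=True)
    return ranked[:k]

print(retrieve("why do we retry the billing API?")[0]["type"])
```

The point being: retrieval hits a curated, typed artifact, not a dump of everything anyone ever said in a session.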

On the format question, imo nobody really knows yet. It probably ends up as some kind of knowledge graph with typed nodes that MCP servers expose, but I haven't seen anyone build that cleanly. Most places are still doing RAG over PDFs, which tells you where the friction is.
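For what it's worth, the typed-node idea looks something like this in miniature. The node/edge types and the `neighbors` query are hypothetical (there's no emerging standard I know of); the idea is that an MCP server could expose typed graph queries like this as tools:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    type: str                 # e.g. "Decision", "System" (made-up types)
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: str
    dst: str
    relation: str             # e.g. "affects", "supersedes" (made-up relations)

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = []

    def add_node(self, node):
        self.nodes[node.id] = node

    def add_edge(self, edge):
        self.edges.append(edge)

    def neighbors(self, node_id, relation=None):
        # The kind of typed traversal an MCP server might expose as a tool.
        return [self.nodes[e.dst] for e in self.edges
                if e.src == node_id and (relation is None or e.relation == relation)]

g = KnowledgeGraph()
g.add_node(Node("d1", "Decision", {"summary": "Use Postgres"}))
g.add_node(Node("s1", "System", {"name": "billing"}))
g.add_edge(Edge("d1", "s1", "affects"))
print([n.id for n in g.neighbors("d1", "affects")])
```

The win over plain RAG would be that queries like "which systems does this decision affect" become graph traversals instead of similarity search.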