edg5000 4 hours ago

There is nothing wrong with the HTTP layer, it's just a way to get a string into the model.

The problem is the industry's obsession with concatenating messages into a conversation stream. There is no reason to do it this way. Every time you run inference on the model, the client gets to compose the context in any way it wants; there are more options than just concatenating prompts and LLM outputs. (A drawback is that caching won't help much if most of the context window is composed dynamically.)

Coding CLIs, like web chat, work well because the agent can pull information into the session at will (read a file, run a web search). The pain point is that if you're appending messages to a stream, you're just slowly filling up the context.

The fix is to keep the message stream concept for informal communication with the prompter, but have an external, persistent message system that the agent can interact with (a bit like email). The agent can decide which messages they want to pull into the context, and which ones are no longer relevant.

The key is to give the agent not just the ability to pull things into context, but also remove from it. That gives you the eternal context needed for permanent, daemonized agents.
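A minimal sketch of the idea: context is rebuilt from scratch on every inference call from an external, persistent message store, and the agent has explicit pull/drop tools. All names here (`MessageStore`, `compose_context`, etc.) are hypothetical, not any real harness's API.

```python
# Sketch: persistent, email-like message store the agent composes
# context from each turn, instead of one ever-growing stream.
from dataclasses import dataclass, field

@dataclass
class Message:
    id: int
    body: str

@dataclass
class MessageStore:
    """Persistent mailbox; only 'pulled' messages enter the context."""
    messages: dict[int, Message] = field(default_factory=dict)
    pulled: set[int] = field(default_factory=set)  # ids currently in context

    def deliver(self, msg: Message) -> None:
        self.messages[msg.id] = msg

    def pull(self, msg_id: int) -> None:
        self.pulled.add(msg_id)       # agent decides this is relevant

    def drop(self, msg_id: int) -> None:
        self.pulled.discard(msg_id)   # agent decides it no longer is

    def compose_context(self, system_prompt: str) -> str:
        # Rebuilt fresh every turn: only pulled messages appear.
        parts = [system_prompt]
        parts += [self.messages[i].body for i in sorted(self.pulled)]
        return "\n\n".join(parts)
```

The point of the `drop` tool is exactly the "remove from context" ability: a daemonized agent can run indefinitely because stale messages stop occupying the window.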

vanviegen 2 hours ago | parent | next [-]

I've been working on a coding agent that does this, on and off, for about a year. Here's my latest attempt: https://github.com/vanviegen/maca#maca - This one lets agents request (and later drop) 'views' of functions and other logical pieces of code, and always see the latest version of them. (With some heuristics to avoid destroying KV caches at every turn.)

The problem is that the models are not trained for this, nor for any other non-standard agentic approach. It's like fighting their 'instincts' at every step, and the results I've been getting were not great.

edg5000 19 minutes ago | parent [-]

So we agree on a message system having potential. But why the vectors? In any case, interesting stuff.

zknill 3 hours ago | parent | prev | next [-]

> "and which ones are no longer relevant."

This is absolutely the hardest bit.

I guess the shortcut is to include the whole chat history: if it contains "do X" followed by "no actually do Y instead", the LLM can figure that out. But isn't it fairly tricky for the agent harness itself to work out relevancy and decide what context to keep? Perhaps this is why the industry defaults to concatenating messages into a conversation stream?

asixicle 3 hours ago | parent | next [-]

That's what the embedding model is for. It's like a tack-on LLM that works out the relevancy and context to grab.

nprateem 2 hours ago | parent [-]

God knows why you think this is possible. If I don't even know what might be relevant to the conversation in several turns, there's no way an agent could either.

asixicle 2 hours ago | parent [-]

One of us is confusing prediction with retrieval. The embedding model doesn't predict what will be relevant several turns from now, only what's relevant on the turn at hand. Each turn gets a fresh semantic search against the full body of memory/agent comms. If the conversation or prompt changes, the next query surfaces different context automatically.

As you build up a "body of work" it gets better at handling massive, disparate tasks in my admittedly short experience. Been running this for two weeks. Trying to improve it.
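A toy sketch of the per-turn retrieval described above: every turn re-queries the full memory store, so a changed prompt surfaces different context with no prediction involved. A real system would use an embedding model; here a bag-of-words cosine similarity stands in for it, and `retrieve` is an illustrative name.

```python
# Each turn is an independent semantic search over all of memory;
# nothing tries to guess what future turns will need.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, memory: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(memory, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]

memory = [
    "decided to use postgres for storage",
    "frontend uses react with vite",
    "auth tokens rotate every 24 hours",
]
print(retrieve("what storage backend did we use", memory, k=1))
```

Swapping the query swaps the surfaced context; the store itself never changes shape between turns.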

vdelpuerto 2 hours ago | parent | prev [-]

The shortcut works sometimes. But if X is common in training data and Y is rare, the model regresses on the next turn even with 'do Y, not X' right there in the history. That's @vanviegen's 'fighting instincts' point: you can't trust the model to read the correction. Gate it before the model runs instead of inferring it from context.

alehlopeh an hour ago | parent | prev | next [-]

As you noted briefly, a big drawback is not getting to take advantage of the cache. Seems like a pretty big drawback.

edg5000 22 minutes ago | parent [-]

Yes, it will destroy most of the caching potential. On the other hand, the average context window needed to achieve the same kind of task may be much smaller, which might make up for it. And with a better harness, fewer rounds may be needed. Plus, hopefully, costs will go down. There is a lot of hope in this comment, though.

sourcecodeplz 3 hours ago | parent | prev | next [-]

Yeah, opencode was/is like this and they never got caching right. Caching is a BIG DEAL to get right.

edg5000 12 minutes ago | parent [-]

Now I see why Anthropic isn't too happy with third-party clients. They may not be as kind to Anthropic's capacity as its own client, whose interests are aligned with minimizing token consumption. A tricky dynamic.

zahlman an hour ago | parent | prev | next [-]

> the industry obsession

Or maybe they haven't thought about it?

Or they tried some simple alternatives and didn't find clear benefits?

> The key is to give the agent not just the ability to pull things into context, but also remove from it.

But then you need rules to figure out what to remove. Which probably involves feeding the whole thing to a(nother?) model anyway, to do that fuzzy heuristic judgment of what's important and what's a distraction. And simply removing messages doesn't add any structure, you still just have a sequence of whatever remains.

edg5000 17 minutes ago | parent [-]

What I'm thinking is: when the agent wants to open more files or pull in more messages, eventually there will be no context left. The agent is then essentially forced to hide some files and messages in order to proceed. Any other commands are refused until the agent makes room in the context. Maybe the best models will be able to handle this responsibility; a bad model will just hide everything and then forget what it was working on.
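The forcing mechanism above could be sketched like this: once the context budget is spent, every open request is refused until the agent hides something. `ContextBudget` and its per-item token costs are illustrative assumptions, not any real harness's API.

```python
# Sketch: a hard context budget that refuses new opens until the
# agent frees room, forcing it to manage its own context.
class ContextBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.open_items: dict[str, int] = {}  # item name -> token cost

    def used(self) -> int:
        return sum(self.open_items.values())

    def open(self, name: str, cost: int) -> bool:
        # Refuse anything that would overflow; the agent must hide first.
        if self.used() + cost > self.max_tokens:
            return False
        self.open_items[name] = cost
        return True

    def hide(self, name: str) -> None:
        self.open_items.pop(name, None)  # frees budget for new opens
```

Whether a model hides the right things under this pressure is exactly the open question: the mechanism is simple, the judgment call isn't.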

raincole 2 hours ago | parent | prev | next [-]

> The key is to give the agent not just the ability to pull things into context, but also remove from it

Of course Anthropic/OpenAI could do it. And the next day everyone would be complaining about how much Claude/Codex has been dumbed down. They don't even comply with the context anymore!

asixicle 3 hours ago | parent | prev | next [-]

To be utterly shameless, this is what I've been building: https://github.com/ASIXicle/persMEM

Three persistent Claude instances share an AMQ plus a Memory Index queried with an embedding model (which I'm literally upgrading to Voyage 4 nano as I type). It's working well so far: I have an instance, Wren, "alive" and functioning very well for 12 days and counting, swapping context in and out via the MCP without relying on any of Anthropic's tools.

And it's on a cheap LXC, 8GB of RAM, N97.

handfuloflight an hour ago | parent [-]

Why is shame a factor at all in sharing your work?

asixicle an hour ago | parent [-]

Good point. I guess because I'm new here, I'm not sure about the decorum around self-promotion.

I just make stuff to share with others, so yeah, good point.

ElFitz 4 hours ago | parent | prev [-]

Hmm.

Maybe there’s a way to play around with this idea in pi. I’ll dig into it.