Egeozin 2 hours ago

Thanks, ‘similarity search mixing semantically related data with genuinely valuable data’ and the point about that mixture ‘adding up during ingestion’ are exactly why we moved from v1 to v2 for companion and conversational use cases. In that domain, scratchpad-like systems work well, and there’s usually no need to over-engineer retrieval.

I think v3 is categorically different. First, the LLM decides what matters, and we believe that scales better than having the engineer impose too much structure upfront and fail to create the right environment for the model, which was part of v1's limitation. Second, the edits don't have to be irreversible if you back them with a simple harness; in our case, git and worktrees. v3 is also a better fit for companions or agents that need stronger problem-solving capabilities, such as coding.
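To make the reversibility point concrete, here's a minimal sketch of the kind of harness I mean. The comment above only says "git and worktrees", so every name below (branch, paths, file) is illustrative, not our actual setup: each agent run edits memory inside a throwaway git worktree, and a bad rewrite is reverted by deleting that worktree.

```shell
# Hypothetical harness sketch: all repo/branch/file names are illustrative.
set -e
root=$(mktemp -d)
git init -q "$root/repo"
cd "$root/repo"
git -c user.email=a@b -c user.name=agent commit -q --allow-empty -m init

# Give one agent run an isolated checkout on a throwaway branch.
git worktree add -q -b agent-run-1 "$root/agent-run-1"
echo "rewritten memory" > "$root/agent-run-1/notes.md"
git -C "$root/agent-run-1" add notes.md
git -C "$root/agent-run-1" -c user.email=a@b -c user.name=agent \
    commit -q -m "agent edit"

# Reversal: drop the worktree and its branch; the main checkout never moved.
git worktree remove --force "$root/agent-run-1"
git branch -q -D agent-run-1
```

The main branch still has only its original commit afterward, which is the whole point: the LLM gets full freedom to rewrite, and the harness makes any rewrite cheap to discard.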

We plan to publish our benchmarking results soon, so others can evaluate the approach for themselves.