noduerme 6 hours ago

I'm kinda doing this in a back-and-forth way over each section with openclaw, and one nice thing is that I've got it including the chat log for changes with each commit. I'm happy with how it's handled my personality quirk of needing to understand all the changes it's making before committing. So I want something interactive like that -- this isn't a codebase I can trust an LLM to just fire and forget. As evidence: at some point it badly misunderstood a set of message strings and parameter names -- "_meta", ".meta", and "_META" -- that meant completely different things, and accidentally crossed and merged them before I caught it and forced it to untangle the whole mess, which it only did well because there were good logs.
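A tiny sketch of the kind of near-collision described above (all names here are invented for illustration): three look-alike identifiers with completely different meanings, plus a cheap guard that makes an accidental merge fail fast instead of silently corrupting things.

```python
# Hypothetical example: three similar-looking names that must never be conflated.
WIRE_KEY = "_meta"   # key embedded in serialized message strings
ATTR_NAME = "meta"   # attribute accessed as obj.meta on parsed objects
ENV_FLAG = "_META"   # unrelated global/config switch

def check_distinct(names):
    """Raise loudly if a refactor ever folds two of these into one."""
    if len(set(names)) != len(names):
        raise ValueError(f"identifier collision: {names!r}")
    return True

# Run this in a test so an LLM-driven rename that merges two of the
# names breaks CI immediately rather than surfacing much later.
check_distinct([WIRE_KEY, ATTR_NAME, ENV_FLAG])
```

A guard like this is no substitute for reading the diffs, but it turns one class of "merged two meanings" mistakes into an immediate, greppable failure.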

I sort of do need something with persistent memory and personality... or a way to persist it without spending a lot of time bringing it back up to speed. It's not exactly specific tasks being tracked; I need it to have a fairly good grasp of the entire ecosystem.

wyre 5 hours ago | parent [-]

How big is the codebase, and how often is the agent writing to memory? You might be able to get away with just appending to the project's CLAUDE.md. You might also want to check out https://github.com/probelabs/probe
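In its simplest form, "appending to CLAUDE.md" can just be a dated notes section tacked onto the end of the file after each session; the file name follows Claude Code's convention, but the section format below is entirely made up:

```shell
# Hypothetical sketch: append a dated session-notes section to CLAUDE.md
# so the agent re-reads accumulated context on the next run.
{
  printf '\n## Session notes (%s)\n' "$(date +%F)"
  printf -- '- "_meta" is the wire key; ".meta" is the parsed attribute; never merge them\n'
} >> CLAUDE.md
```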

noduerme 5 hours ago | parent [-]

Hm. That looks a lot more granular, which is interesting... I'm not sure it would help me on this.

The codebase is small enough that I can go find all the changes the LLM executed for each request, read them with a very skeptical eye to verify they look sane, and ask it why it did something, or whether it made a mistake, if anything smells wrong.

That said, the code I'm rewriting is a genetic algorithm / evaluation engine I wrote years ago, which itself writes code that it then evaluates. So the challenge is having the LLM modify the control structure -- with the aim of letting an agent run the system at high speed and read the result stream through a headless API -- without breaking either the writing or the evaluation of the code that the codebase itself is generating and running.

Openclaw has a surprisingly good handle on this now, after a very, very long-running session, but most of the problems I'm still hitting have to do with it not understanding that modifying certain parameters or names can cause downstream effects in the output (eval code) or input (load files) of the system as it evolves.
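The shape of that system, as described above, can be sketched roughly like this (everything here is invented for illustration, not the actual engine): a loop that generates small programs, evaluates them, and emits a machine-readable result stream that a headless agent could consume without parsing human-oriented logs.

```python
# Minimal, hypothetical sketch of a generate/evaluate loop with a
# JSON-lines result stream suitable for a headless consumer.
import json
import random

def random_candidate():
    # Stand-in for real code generation: a tiny arithmetic program.
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"result = {a} * x + {b}"

def evaluate(src, x=3):
    # Run the generated code in an isolated namespace and read its result.
    env = {"x": x}
    exec(src, {}, env)
    return env["result"]

def run(generations=5, seed=0):
    random.seed(seed)
    stream = []
    for gen in range(generations):
        src = random_candidate()
        score = evaluate(src)
        # Each record is self-describing JSON, so downstream tooling
        # depends on field names, not log formatting.
        stream.append(json.dumps({"gen": gen, "src": src, "score": score}))
    return stream

for line in run():
    print(line)
```

The point of the sketch is the failure mode in the comment: the generated source and the load/eval path both depend on shared names (here, `result` and `x`), so renaming one side without the other breaks the system even though each file looks fine in isolation.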