ip26 · 3 hours ago
Maybe the right answer is "why not both," but subagents can also be used for that problem: when something isn't going as expected, fork a subagent to solve it and return with the answer. It's interesting to imagine a single model deciding to wipe its own memory, though, rolling back in time to a past version of itself (only now with the answer to a vexing problem).
jon-wood · 2 hours ago
I forget where now, but I'm sure I read an article from one of the coding harness companies describing exactly that. Effectively the model could pass a note to its past self saying "Path X doesn't work," and otherwise reset the context to any previous point. I could see this working like some sort of undo tree, with multiple branches you can jump back and forth between.
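The undo-tree idea above could be sketched roughly like this. This is a minimal illustration, not any real harness's API: every class, method, and message format here is hypothetical. The point is just that snapshots of the message history form a tree, and jumping back to an earlier node can carry a short note so the dead end isn't retried.

```python
# Hypothetical sketch of a context "undo tree" for an agent harness.
# Snapshots of the message history form branches; resetting to an
# earlier node can pass a note ("Path X doesn't work") to the past self.

class ContextNode:
    def __init__(self, messages, parent=None):
        self.messages = list(messages)  # frozen snapshot of the history
        self.parent = parent
        self.children = []

class ContextTree:
    def __init__(self):
        self.root = ContextNode([])
        self.current = self.root

    def append(self, message):
        """Extend the current branch with a new snapshot node."""
        node = ContextNode(self.current.messages + [message],
                           parent=self.current)
        self.current.children.append(node)
        self.current = node

    def reset_to(self, node, note=None):
        """Jump back to an earlier snapshot, optionally carrying a note
        to the 'past self' so the failed path isn't attempted again."""
        self.current = node
        if note is not None:
            self.append(f"note-to-self: {note}")
        return self.current.messages

tree = ContextTree()
tree.append("user: fix the failing test")
checkpoint = tree.current
tree.append("assistant: trying approach X...")
tree.append("tool: approach X failed")

# Roll back to the checkpoint, keeping only the lesson learned.
restored = tree.reset_to(checkpoint, note="Path X doesn't work")
print(restored)
```

Because old nodes are never deleted, the failed branch still hangs off `checkpoint` and you can jump back into it later, which is what makes this an undo *tree* rather than a simple linear rollback.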