HarHarVeryFunny · 3 hours ago
That's not quite how it works, and anyway, if the model can't generate an accurate find/replace string, why would you expect it to do any better generating accurate commands to drive your editor (assuming it knew how to do that in the first place)?

The way edits happen is that the agent (local) first tells the model (typically remote) that it has an edit tool, e.g. one taking a file name, a find string, and a replace string as parameters. If the model decides it wants to edit a file, it invokes this edit tool, which just results in a blob of JSON in the model's response specifying the edit (filename, etc.). The agent then receives the response, intercepts this JSON blob, sees that it is an edit request, and does what is asked.

The problem the article describes is that the edit request (tool invocation) generated by the model isn't always 100% accurate. Even if the agent told the model it had a tool to invoke an actual editor, say sed, and assuming the model knew how to use sed, this would still fail whenever the edit request can't be interpreted literally by the editor, because it's inaccurate.
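To make the flow concrete, here's a minimal sketch of the agent-side handler described above. The function name, JSON field names, and error string are all illustrative assumptions, not any particular agent's actual protocol; the point is that the agent applies the model's edit verbatim, and an inaccurate find string simply fails to match:

```python
import json

def apply_edit(tool_call_json: str) -> str:
    """Apply a model-issued find/replace edit request.

    The JSON blob is what the model emits when it "invokes" the edit
    tool; field names here ("file", "find", "replace") are hypothetical.
    """
    args = json.loads(tool_call_json)
    path, find, replace = args["file"], args["find"], args["replace"]

    with open(path) as f:
        text = f.read()

    if find not in text:
        # The model's find string doesn't match the file verbatim --
        # exactly the failure mode the article is describing. The agent
        # can only report the error back to the model and retry.
        return "error: find string not found in file"

    # Replace only the first occurrence, as most edit tools do.
    with open(path, "w") as f:
        f.write(text.replace(find, replace, 1))
    return "ok"
```

A real agent would send the returned status back to the model as the tool result, letting the model re-read the file and retry with a corrected find string.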
cyanydeez · 2 minutes ago
Seems like it's veering towards a per-model protocol, similar to the expectation that these models will develop their own languages to speak among themselves as agents. The trouble is that because it's all indeterminate slop, every model will break in small ways, so you're back to indeterminacy and building a harness on top of the harness. Still, &lt;nerd snipe&gt;, there's probably a way to get the local model and an arbitrary remote model to agree on how to make a method call. But that will only be fruitful if you find a highly reproducible set of tuples within the models' shared space.