aszen a day ago

The most important part is editing code, and to do that reliably I think the Claude models are trained on their own str_replace tool schema. Models find it hard to modify existing code, and they can't just rewrite whole files either, because that's expensive and doesn't scale.
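
To make that concrete, a targeted edit is just "find this exact snippet and swap it for another", roughly like the sketch below (my own Python sketch of the idea, not Anthropic's actual implementation; the function name and error handling are mine):

    import pathlib

    def str_replace_edit(file_path: str, old_string: str, new_string: str) -> None:
        """Apply a targeted edit: the model emits only the snippet to change,
        not the whole file, so the edit stays cheap even for large files."""
        path = pathlib.Path(file_path)
        text = path.read_text()
        # Require a unique match so the edit can't silently land in the wrong place.
        count = text.count(old_string)
        if count == 0:
            raise ValueError("old_string not found in file")
        if count > 1:
            raise ValueError("old_string is ambiguous; include more surrounding context")
        path.write_text(text.replace(old_string, new_string, 1))

The uniqueness check is the part that makes this usable by a model: if the snippet isn't unique, the model is asked to include more surrounding context rather than risk editing the wrong spot.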

embedding-shape a day ago | parent | next [-]

Here's where I was hoping openly available models would shine: a community gets together, starts sharing successful and failed runs with their own agent, builds an open dataset for their specific syntax and tooling, and finally finetunes new model variants with it for the community.
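
Something like one record per run, e.g. (a hypothetical format I'm making up on the spot, not an existing standard):

    import json

    # One JSON object per agent run: enough to replay it or filter it into a finetuning set.
    run_record = {
        "task": "Fix failing test in parser module",
        "model": "some-open-weights-model",
        "tool_calls": [
            {"tool": "read_file", "input": {"path": "src/parser.py"}},
            {"tool": "str_replace", "input": {"path": "src/parser.py",
                                              "old": "return tokens",
                                              "new": "return tokens or []"}},
            {"tool": "run_tests", "input": {"cmd": "pytest -q"}},
        ],
        "outcome": "success",  # or "failure", decided by a verifiable check like tests passing
    }

    print(json.dumps(run_record, indent=2))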

libraryofbabel a day ago | parent | prev [-]

Yeah, I expect there is some RLVR training going on to get the Claude LLMs good at the specific tool calls used in Claude Code. Having said that, the string-replacement tool schema for file edits is not very complicated at all (you can see it in the tool-call schema Claude Code sends to the LLM), so you could easily use it in your own 200-300 line agent if you wanted to make sure you're playing to the LLM's strengths.
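
The tool definition an agent advertises to the model is roughly this shape, along the lines of what goes into the tools list of an API request (names and descriptions here are approximate, not copied from Claude Code):

    # Illustrative tool definition of the kind an agent sends to the LLM with each request.
    str_replace_tool = {
        "name": "str_replace",
        "description": "Replace an exact snippet of text in a file with a new snippet. "
                       "old_string must match exactly one location in the file.",
        "input_schema": {
            "type": "object",
            "properties": {
                "file_path": {"type": "string", "description": "Absolute path to the file to edit"},
                "old_string": {"type": "string", "description": "Exact text to replace"},
                "new_string": {"type": "string", "description": "Text to insert in its place"},
            },
            "required": ["file_path", "old_string", "new_string"],
        },
    }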

aszen a day ago | parent [-]

Yeah, that's one example, but I suspect they train the model on entire sequences of tool calls, so unless you prompt the model exactly the way they do, you won't get the same results.

There's a reason they won the agent race: their models are trained to use their own tools.

libraryofbabel a day ago | parent [-]

Agreed, at this point the RLVR tasks are probably long sequences of tool calls doing complex tasks in some simulated dev environment.

That said, I think it's hard to say how much of a difference it really makes in terms of making Claude Code specifically better than other coding agents using the same LLM (versus just making the LLM better for all coding agents using roughly similar tools). There is probably some difference, but you'd need to run a lot of benchmarks to find out.

aszen a day ago | parent [-]

Agreed, it probably contributes to the model improving for all agents, but crucially the improvement is verifiable against their own agent, so they get a good feedback loop to improve both.