MrScruff 11 hours ago

> This is speculative, but I suspect that if we dropped one of the latest, most capable open-weight LLMs, such as GLM-5, into a similar harness, it could likely perform on par with GPT-5.4 in Codex or Claude Opus 4.6 in Claude Code.

Unless I'm misunderstanding what's being described here, running Claude Code with different backend models is pretty common.

https://docs.z.ai/scenario-example/develop-tools/claude

It doesn't perform on par with Anthropic's models in my experience.
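For reference, the linked z.ai docs boil down to pointing Claude Code at an Anthropic-compatible endpoint via environment variables. A minimal sketch (the env var names are Claude Code's own; the URL is what the z.ai docs describe, and the key is a placeholder — verify both against the link above):

```shell
# Point Claude Code at an Anthropic-compatible backend instead of
# Anthropic's own API. The base URL below is taken from z.ai's docs;
# substitute your own API key.
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="<your z.ai API key>"
# then launch Claude Code as usual:
# claude
```

The same pattern works for any provider exposing an Anthropic-compatible API, which is what makes swapping backend models under the same harness so common.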

barnabee 7 hours ago | parent | next [-]

I've found that on some projects maybe 70-80% of what can be done with Sonnet 4.6 in OpenCode can be done with a cheaper model like MiMo V2 Pro or similar. On others, Sonnet completely outperforms the cheaper models; I'm not sure why. I only find Opus to be worth the extra cost maybe 5% of the time.

I also find OpenCode to be drastically better than Claude Code, to the extent that I'm buying OpenRouter API credits rather than Claude Max because Claude Code just isn't good enough.

I'm frankly amazed at what OpenCode can do with a few custom commands (just for common things like doing a quality review, etc.) and maybe an extra "agent" definition or two. For many projects even most of this isn't necessary. Often I just ask it to write an AGENTS.md that encapsulates a good development workflow, git branch/commit policy, and testing and quality standards, plus a ROADMAP.md and per-milestone markdown files with phases and task tracking, and that's enough.
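A file along the lines described might look something like this (purely illustrative — the commenter doesn't share theirs, and every detail below is an assumption):

```markdown
# AGENTS.md (illustrative sketch)

## Workflow
- Work from ROADMAP.md; each milestone has its own markdown file
  with phases and a task checklist. Update it as tasks complete.

## Git policy
- One feature branch per task; no direct commits to main.
- Small, focused commits with descriptive messages.

## Testing & quality
- All changes must pass the test suite before merge.
- Run the linter and formatter before committing.
- Do a self-review pass against these standards before marking a task done.
```

The point is that a plain markdown file the agent reads at startup can stand in for much of what heavier harnesses enforce programmatically.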

I'm somewhat interested in these more involved harnesses that automate or enforce more, but I don't know that they'd give me much I don't already have, and I suspect they'd struggle to keep up with the state of the art compared to something less specialized.

kamikazeturtles 11 hours ago | parent | prev [-]

> It doesn't perform on par with Anthropic's models in my experience.

Why do you think that is? Are Anthropic's models just better, or do they train the models to somehow work better with the harness?

mmargenot 11 hours ago | parent | next [-]

It is now more common to improve models in agentic systems "in the loop" with reinforcement learning. Anthropic is [very likely] doing this on the backend to systematically improve the performance of their models specifically with their own tools. I did this with Goose at Block using more classic post-training approaches, because it was before RL really hit the mainstream as an approach for this.

If you want to look at some of the tooling and process for this, check out verifiers (https://github.com/PrimeIntellect-ai/verifiers), hermes (https://github.com/nousresearch/hermes-agent) with its accompanying trace datasets (https://huggingface.co/datasets/kai-os/carnice-glm5-hermes-t...), as well as other open-source tools and harnesses.
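The core idea behind "in the loop" RL can be sketched schematically: sample agent actions, score trajectories with an automatic verifier, and push the policy toward verified behavior. A toy REINFORCE-style example on a stateless task (this deliberately uses no real library API — the action names and the verifier are illustrative stand-ins, not anything from verifiers or hermes):

```python
import math
import random

random.seed(0)

ACTIONS = ["run_tests", "edit_file", "give_up"]  # toy stand-ins for tool calls
logits = [0.0, 0.0, 0.0]                         # policy parameters

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def verifier(action):
    # Stand-in for an automatic checker that scores a trajectory,
    # e.g. "did the agent's patch pass the test suite?"
    return 1.0 if action == "run_tests" else 0.0

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

LR = 0.5
for step in range(200):
    probs = softmax(logits)
    a = sample(probs)
    reward = verifier(ACTIONS[a])
    # REINFORCE: grad of log pi(a) w.r.t. the logits is one_hot(a) - probs
    for i in range(len(logits)):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += LR * reward * grad

probs = softmax(logits)
best = ACTIONS[probs.index(max(probs))]
print(best)  # -> run_tests
```

Real agentic RL replaces the bandit with multi-turn tool-use trajectories and the table of logits with model weights, but the verifier-driven reward signal is the same shape.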

mmargenot 7 hours ago | parent [-]

Here’s a concrete example of the above from today, using that dataset: https://x.com/kaiostephens/status/2040396678176362540?s=46

MrScruff 11 hours ago | parent | prev | next [-]

It's a good question; I've wondered that myself. I haven't used GLM-5 with CC, but I've used GLM-4.7 a fair amount, often swapping back and forth with Sonnet/Opus. The difference is fairly obvious: on occasion I've mistakenly left GLM running when I thought I was using Sonnet, and could tell pretty quickly just from the gap in problem-solving ability.

esafak 11 hours ago | parent | prev [-]

They're just dumber. I've used plenty of models. The harness is not nearly as important.

vidarh 10 hours ago | parent [-]

If anything, the harness matters more with those other models because of how much dumber they are. You can compensate for some of the stupidity (but by no means all of it) with a harness that tries to fill gaps Claude Code doesn't, because doing so isn't necessary for Anthropic's own models.