Tiberium 7 hours ago

I highly doubt some of those results. GPT 5.2 / GPT 5.2 Codex is incredible for cybersecurity and CTFs, and 5.3 Codex (not on the API yet) even more so. There is absolutely no way it's below DeepSeek or Haiku. Seems like a harness issue, or did they test those models at no/low reasoning effort?

jakozaur 7 hours ago | parent | next [-]

As someone who does evals and training data sets for a living: in niche skills, you can find plenty of surprises.

The code is open-source; you can run it yourself using Harbor Framework:

git clone git@github.com:QuesmaOrg/BinaryAudit.git

export OPENROUTER_API_KEY=...

harbor run --path tasks --task-name lighttpd-* --agent terminus-2 --model openrouter/anthropic/claude-opus-4.6 --model openrouter/google/gemini-3-pro-preview --model openrouter/openai/gpt-5.2 --n-attempts 3

Please open a PR if you find something interesting, though our domain experts spend a fair amount of time looking at the trajectories.

Tiberium 6 hours ago | parent | next [-]

Just for fun, I ran dnsmasq-backdoor-detect-printf (which has a 0% pass rate on your leaderboard for the GPT models) with --agent codex and gpt-5.2-codex instead of terminus-2, and it identified the backdoor successfully on the first try. I honestly think it's a harness issue; could you re-run the benchmarks with the Codex agent for gpt-5.2-codex and gpt-5.2?
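
For reference, the invocation was roughly this, reusing the flags from your command above (the OpenRouter slug for gpt-5.2-codex is my guess and may differ):

harbor run --path tasks --task-name dnsmasq-backdoor-detect-printf --agent codex --model openrouter/openai/gpt-5.2-codex --n-attempts 1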

Tiberium 6 hours ago | parent | prev [-]

Are the existing trajectories from your runs published anywhere? Or is the only way for me to run them again?

jakozaur 6 hours ago | parent [-]

I can provide trajectories, though we are probably not going to publish them this time; that would need some extra safeguards.

Email me; the address is in my profile.

stared 2 hours ago | parent | prev [-]

To be honest, it was a surprise to us too. I mean, I used GPT 5.2 Codex in Cursor for decompiling an old game and it worked (way better than Claude Code with Opus 4.5). We tested Opus 4.6, but are waiting for the public API to test GPT 5.3 Codex.

At the same time, tasks can differ, and not everything that works best end-to-end matches what works well in a typical, interactive workflow.

We used the Terminus 2 agent, as it is the default in Harbor (https://harborframework.com/), because we wanted to stay unbiased. Other frameworks would very likely change the results.