15× vs. ~1.37×: Recalculating GPT-5.3-Codex-Spark on SWE-Bench Pro (twitter.com)
27 points by nvanlandschoot a day ago | 15 comments
solarkraft a day ago | parent | next [-]

> The narrative from AI companies hasn’t really changed, but the reaction has. The same claims get repeated so often that they start to feel like baseline reality, and people begin to assume the models are far more capable than they actually are.

This has been the case for people who buy into the hype and don’t actually use the products, but I’m pretty sure people who do use them are pretty disillusioned by all the claims. The only somewhat reliable method is to test the things for your own use case.

That said: I always expected the tradeoff of Spark to be accuracy vs. speed. That it’s still significantly faster at the same accuracy is wild. I never expected that.

roxolotl 29 minutes ago | parent | next [-]

The people I know who use them the most also seem the most likely to buy into the hype. The coworker who no longer answers questions by talking about code, but instead by talking about which skills are the best, is the same one who posts all the hype.

ijidak an hour ago | parent | prev [-]

I believe a lot of the speedup is due to a new chip they use [1], so the fact that the speedup didn't come from reducing the number of operations is likely why the accuracy has changed so little.

1. https://www.cerebras.ai/blog/openai-codexspark

nearbuy 28 minutes ago | parent | prev | next [-]

Unless I'm missing it, the page they're referring to (https://openai.com/index/introducing-gpt-5-3-codex-spark/) never claims Spark is 15x faster.

It looks like it only appears in the snippet the Google result shows, presumably taken from the meta tags. It's possible an earlier draft claimed a 15x speed boost and they forgot to remove the claim from the tags.
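
If you want to check what the tags actually say, a quick fetch-and-grep does it. This is a best-effort sketch: the regex parsing is deliberately crude, and the site may require a browser-like User-Agent.

    # Fetch the page and print its <meta> tags to see where "15x" lives.
    import re
    import urllib.request

    url = "https://openai.com/index/introducing-gpt-5-3-codex-spark/"
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

    for tag in re.findall(r"<meta[^>]*>", html):
        if "description" in tag or "og:" in tag:
            print(tag)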

vessenes 38 minutes ago | parent | prev | next [-]

This is the best sort of correct, in that it’s technically correct. The thing is, we don’t need 5.3 xxhigh reasoning for everything. Giving up some intelligence, then taking the hit of some inevitable re-runs / re-prompts at 15x, ends up with, I bet, more than a 37% speed improvement on a lot of tasks.

There are two ways to run this, and I’m curious which is better (by time or quality; either would be interesting): you could run 5.3 xxhigh as the coordinator, spinning up some eager-beaver coders that need wrangling, or you could run Spark as the coordinator and probably the code drafter, farming out to the big brains where it runs into trouble.

Now that I think about it, corporations use both models as well. It would be nice for the user if the fast coordinator worked well; that lowers turns and ultimately could let you stay in the zone while pairing with a coding agent. But I really don’t know which is better.
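
A minimal sketch of the second arrangement, just to make it concrete. The model names, call_model(), and tests_pass() are hypothetical placeholders, not a real API:

    # Sketch: Spark drafts cheaply, the big model only sees the hard cases.
    FAST = "gpt-5.3-codex-spark"
    BIG = "gpt-5.3-codex-xxhigh"

    def call_model(model: str, task: str) -> str:
        """Placeholder for a chat-completions call returning a patch."""
        raise NotImplementedError

    def tests_pass(patch: str) -> bool:
        """Placeholder: apply the patch and run the project's test suite."""
        raise NotImplementedError

    def solve(task: str, max_fast_tries: int = 2) -> str:
        # Let the fast model take a few cheap shots first; at 15x the raw
        # speed, a couple of retries are still cheaper than one big-model run.
        for _ in range(max_fast_tries):
            patch = call_model(FAST, task)
            if tests_pass(patch):
                return patch
        # Escalate only when the fast model is stuck.
        return call_model(BIG, task)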

nvanlandschoot a day ago | parent | prev | next [-]

Method: I took OpenAI’s published SWE-Bench Pro chart points and matched GPT-5.3-Codex-Spark against the baseline model at comparable accuracy levels across reasoning-effort settings. At similar accuracy, the effective speedup is closer to ~1.37× than to 15×.
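
Concretely, it's just interpolation on the chart points. The numbers below are placeholders to show the shape of the calculation, not the actual chart values:

    import numpy as np

    # (accuracy %, seconds per task) read off OpenAI's chart, one point per
    # reasoning-effort setting. Placeholder values, NOT the real chart data.
    baseline = [(38.0, 900.0), (42.0, 1400.0), (45.0, 2100.0)]
    spark_acc, spark_sec = 42.0, 1020.0  # Spark's point on the same chart

    accs = [a for a, _ in baseline]
    secs = [t for _, t in baseline]

    # Interpolate the baseline's time-per-task at Spark's accuracy level,
    # then compare wall-clock time at matched accuracy.
    baseline_sec = np.interp(spark_acc, accs, secs)
    print(f"effective speedup: {baseline_sec / spark_sec:.2f}x")  # ~1.37x here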

charcircuit an hour ago | parent | prev | next [-]

>The fair comparison is where the models are basically equivalent in intelligence

I don't agree with this premise. I think it is fair to say that Haiku is a faster model than Opus.

pennaMan an hour ago | parent | prev | next [-]

Efficiency per token has tanked, but it's still faster. Given this is the first generation on Cerebras hardware, this is the worst it's ever going to be.

When it reaches the efficiency of the main 5.3 Codex at this token rate, these kinds of articles will seem silly in retrospect.

jiggawatts an hour ago | parent | prev | next [-]

Something I find odd in the AI space is that almost all journalists republish vendor benchmark claims without question.

Why not just benchmark the models yourself?

Tiny little YouTube channels will spend weeks benchmarking every motherboard from every manufacturer to detect even the tiniest differences!

Car reviews will often test drive the cars and run their own dyno tests.

Etc…

AI reviews meanwhile are just copy-paste from the market blurb.
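
A first-pass number takes minutes to get against any OpenAI-compatible endpoint. The model name and prompt here are just stand-ins:

    # Crude tokens/sec measurement. Requires: pip install openai
    import time
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def tokens_per_second(model: str, prompt: str) -> float:
        start = time.perf_counter()
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        elapsed = time.perf_counter() - start
        # Completion tokens over wall-clock time: crude, but it's *your* number.
        return resp.usage.completion_tokens / elapsed

    print(tokens_per_second("gpt-4o-mini", "Write a quicksort in Python."))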

CamouflagedKiwi 29 minutes ago | parent | next [-]

It's not free to run those benchmarks, especially on the big models.

Ideally journalists / their employers would swallow that as the cost of doing business, but it's a hard sell if they're feeling the squeeze and aren't making much in the first place.

coldtea an hour ago | parent | prev | next [-]

>Why not just benchmark the models yourself?

Because their incentives are to churn out stupid articles fast to get more views, and to stay in the good graces of major AI companies and potential advertisers. That, and their integrity and passion for what they do are minimal, plus they're paid peanuts.

Doesn't help that most brain-rotted readers are hardly calling them out for it, if they even notice it.

latchkey an hour ago | parent | prev [-]

Even the third-party AI benchmarks that do get published [0] are a sham too. That one is run by a paid shill (SemiAnalysis), and the results are all highly tuned by the vendors to make themselves look good.

[0] https://github.com/InferenceMAX/InferenceMAX/
