slacktivism123 5 days ago

Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.

It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".

Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969

Workaccount2 5 days ago | parent | next

The best benchmark is the community vibe in the weeks following a release.

Claude benchmarks poorly but vibes well. Gemini benchmarks well and vibes well. Grok benchmarks well but vibes poorly.

(Yes, I know: vibes are just a gush of anecdotes. But the vibes are the approximate shade of gray that emerges from countless black-and-white remarks.)

diggan 4 days ago | parent | next

> The best benchmark is the community vibe in the weeks following a release.

True, just be careful which community you use as a vibe check. Most of the mainstream/big ones around AI and LLMs have influence campaigns run against them and are giant hive-minds that all think alike, so you need to carefully assess whether anything you're reading is true, and votes tend to make it even worse.

theblazehen 4 days ago | parent

I generally check LM Arena, as well as which models have had the most weekly tokens on OpenRouter.

wubrr 5 days ago | parent | prev

the vibes are just a collection of anecdotes

ryoshu 5 days ago | parent

"qual"

k__ 5 days ago | parent | prev

Yes, often you see huge gains on some benchmark, and then the model is run through Aider's polyglot benchmark and doesn't even hit 60%.