threepts 8 hours ago

Why don't they ask their premier model to generate a bench for them?

Jokes aside, a benchmark I look forward to is ARC-AGI-3. I tried out the human-playable version, and it feels very reasoning-heavy.

Leaderboard: https://arcprize.org/leaderboard

(Most premier models don't even pass 5 percent.)

falcor84 8 hours ago | parent | next [-]

They focus on minimizing the number of moves and don't allow any harness whatsoever, which sets the bar extremely high. The current top verified contender (Claude Opus 4.6) sits at only 0.45%, but given how new the benchmark is, I expect a lot of improvement in the next generation of models.

threepts 7 hours ago | parent [-]

Optimal for judging actual reasoning ability rather than an LLM's ability to regurgitate knowledge from a necropost on HN/Reddit/Twitter from 2018.

jjmarr 2 hours ago | parent | next [-]

I'm building an LLM agent that can play DS games. The biggest blocker isn't reasoning ability; it's clicking on the right spot to move things around in space.

ARC-AGI seems to test that as well. Every game is a rectangular grid to make this as easy as possible, yet the AIs still fail.

I'm fairly certain the way forward isn't through agents directly interfacing with UIs but through agents using scripts and other tools to interact with the interface. That's why harnesses are so critical to performance on tasks like this.
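
As a rough sketch of what I mean (all names here are hypothetical, not from any real framework), the harness does the pixel math the agent is bad at:

    # Hypothetical sketch: expose grid-level tools to the agent
    # instead of raw pixel clicks.
    from dataclasses import dataclass

    @dataclass
    class GridHarness:
        cell_size: int = 16  # pixels per grid cell (assumed layout)
        origin_x: int = 0    # pixel offset of the grid's top-left corner
        origin_y: int = 0

        def click_cell(self, row: int, col: int) -> tuple[int, int]:
            # Map a grid coordinate to the center pixel of that cell,
            # which can then be fed to the emulator's touch input.
            x = self.origin_x + col * self.cell_size + self.cell_size // 2
            y = self.origin_y + row * self.cell_size + self.cell_size // 2
            return (x, y)

    harness = GridHarness()          # DS touchscreen is 256x192
    print(harness.click_cell(3, 5))  # -> (88, 56)

The agent reasons about "row 3, col 5"; the harness turns that into coordinates.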

I would like a version of ARC-AGI that tests the agent's ability to dynamically create these harnesses.

knollimar 7 hours ago | parent | prev [-]

A small harness that stores text files and manages context could be useful; otherwise you lose all ability to measure that skill (and that matters because it reflects real-world use on large codebases).
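
Something like this, sketched loosely (the interface is assumed, not from any particular framework):

    # Minimal sketch: persistent notes plus a context budget.
    class ScratchpadHarness:
        def __init__(self, max_context_chars: int = 8000):
            self.notes: dict[str, str] = {}  # stands in for text files
            self.max_context_chars = max_context_chars

        def write_note(self, name: str, text: str) -> None:
            self.notes[name] = text

        def build_context(self, task: str) -> str:
            # Concatenate the task and saved notes, dropping the
            # oldest notes first when over the character budget.
            parts = [task] + list(self.notes.values())
            ctx = "\n".join(parts)
            while len(ctx) > self.max_context_chars and len(parts) > 1:
                parts.pop(1)  # oldest note goes first
                ctx = "\n".join(parts)
            return ctx

The point isn't this exact design; it's that the model has to decide what to write down and what to re-read, which is exactly the skill you'd want to measure.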

sowbug 7 hours ago | parent | prev | next [-]

> Why don't they ask their premier model to generate a bench for them?

It's not a crazy idea. Have the older model interview the newer one and then ask both (or maybe a third referee model) which one they think is smarter. Repeat 100x with different seeds. The percentage of times both sides agree the newer model won is the score.
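
Sketched out (ask() here is a hypothetical helper that sends a prompt to a model and returns its reply):

    def self_bench(old_model, new_model, ask, rounds=100):
        wins = 0
        for seed in range(rounds):
            # Older model writes the question, newer model answers it.
            question = ask(old_model, "Pose one hard reasoning question "
                                      "to interview a rival model.", seed)
            answer = ask(new_model, question, seed)
            # Both sides judge the exchange; a win needs agreement.
            verdict = (f"Q: {question}\nA: {answer}\n"
                       "Did the answering model win? Reply yes or no.")
            votes = [ask(m, verdict, seed) for m in (old_model, new_model)]
            if all(v.strip().lower().startswith("yes") for v in votes):
                wins += 1
        return wins / rounds  # fraction of unanimous wins for the new model

The obvious failure mode is mutual flattery or self-preference bias, which is why you'd want that third referee model in the loop.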

alansaber 8 hours ago | parent | prev | next [-]

Very reasoning-heavy benchmarks do seem like the way to go; they're the hardest to game.

xtracto 7 hours ago | parent | prev | next [-]

Can AI write a problem so difficult that even AI cannot solve it?

Hehe

ngruhn 4 hours ago | parent [-]

How about prime factorization?
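
It works trivially in that direction: composing a hard instance is cheap, solving it isn't. A sketch (sympy's randprime is a real function):

    from sympy import randprime

    # Two random ~150-digit primes; multiplying them is instant...
    p = randprime(10**149, 10**150)
    q = randprime(10**149, 10**150)
    challenge = p * q
    # ...but recovering p and q from `challenge` is believed infeasible
    # for classical computers at this size.

So for problems where generation and verification are easy but solving is hard, an AI can absolutely pose problems it can't solve.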
