simianwords a day ago

“In many reasoning-heavy benchmarks, o1 rivals the performance of human experts. Recent frontier models do so well on MATH and GSM8K that these benchmarks are no longer effective at differentiating models. We evaluated math performance on AIME, an exam designed to challenge the brightest high school math students in America.”

https://openai.com/index/learning-to-reason-with-llms/

Those benchmarks were so saturated that they didn’t even bother running them on the newer models.

That’s interesting in itself, because it shows how rapidly LLMs are progressing.

I’m also making a bigger claim: you can’t get GPT-5 Thinking to make a mistake in undergraduate-level maths. At minimum, it performs comparably to a good student.
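That claim is at least testable. A minimal sketch of how one might spot-check it: feed the model undergraduate-level problems with known answers and grade the replies. This assumes the OpenAI Python SDK; the model name "gpt-5", the `PROBLEMS` list, and the `grade` helper are placeholders for illustration, not a real benchmark harness.

```python
# Minimal sketch: grade a model on a tiny set of undergrad math questions.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical problem set: (question, expected final answer) pairs.
PROBLEMS = [
    ("Compute the derivative of x^3 * sin(x) at x = 0. Reply with just the number.", "0"),
    ("What is the rank of the 3x3 matrix of all ones? Reply with just the number.", "1"),
]

def grade(model: str) -> float:
    """Return the fraction of problems the model answers exactly correctly."""
    correct = 0
    for question, expected in PROBLEMS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answer = resp.choices[0].message.content.strip()
        correct += (answer == expected)
    return correct / len(PROBLEMS)

print(grade("gpt-5"))  # model name is an assumption, not a confirmed API identifier
```

Exact string matching is crude; a real harness would normalize answers or use a verifier, but even this rough version would surface any systematic mistakes.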