blenderob 3 hours ago

Can someone explain how this would work?

> the answers are known to the authors of the questions but will remain encrypted for a short time.
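As I understand it, that part is just a commit-and-reveal step so the authors can prove they had the answers before the deadline. A minimal sketch of what that might look like, assuming a plain salted hash commitment (the actual scheme in the paper may differ):

    # Illustrative commit-and-reveal sketch; purely an assumption about how
    # "encrypted for a short time" could work, not the paper's actual scheme.
    import hashlib
    import secrets

    def commit(answer: str) -> tuple[str, str]:
        """Publish the digest now; keep the answer and salt private."""
        salt = secrets.token_hex(16)
        digest = hashlib.sha256((salt + answer).encode()).hexdigest()
        return digest, salt

    def verify(answer: str, salt: str, digest: str) -> bool:
        """Anyone can later check that the revealed answer matches the commitment."""
        return hashlib.sha256((salt + answer).encode()).hexdigest() == digest

    # At question-release time the authors publish `digest`;
    # at reveal time they publish `answer` and `salt`.
    digest, salt = commit("hypothetical answer text")
    assert verify("hypothetical answer text", salt, digest)

That only proves the authors knew the answers in advance, though; it says nothing about who produced a submitted proof.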

Ok. But humans may be able to solve the problems too. What prevents Anthropic or OpenAI from hiring mathematicians, having them write the proofs, and passing them off as LLM-written? I'm not saying that's what they'll do. But shouldn't the paper say something about how they're going to validate that this doesn't happen?

Honest question here. Not trying to start a flame war. I'm honestly confused about how this is going to test what it wants to test. Or maybe I'm just plain confused. Can someone help me understand this?

yorwba 3 hours ago | parent | next [-]

This is not a benchmark. They just want to give people the opportunity to try their hand at solving novel questions with AI and see what happens. If an AI company pulls a solution out of their hat that cannot be replicated with the products they make available to ordinary people, that's hardly worth bragging about and in any case it's not the point of the exercise.

YeGoblynQueenne an hour ago | parent | next [-]

Hey, sorry, totally out of context but I've always wanted to ask about the username. I keep reading it as "yoruba" in my mind. What does it mean, if I'm not being indiscreet?

fph 2 hours ago | parent | prev | next [-]

The authors mention that before publication they tested these questions on Gemini and GPT, so the questions have already been available to the two biggest players; they have a head start.

data_maan an hour ago | parent [-]

Looks like very sloppy research.

cocoto 2 hours ago | parent | prev [-]

They could solve the problems and train the next models with the answers; that way, future models could “solve” these.

data_maan an hour ago | parent | prev | next [-]

Nothing prevents them, and they are already doing that. I work in this field, and one can be sure that, given the notoriety this preprint has received, the questions will be solved soon.

conformist 3 hours ago | parent | prev | next [-]

It's possible but unlikely, given the short timeline, the diversity of questions (which would require multiple mathematicians), and the low stakes. Also, they've already run preliminary tests.

blenderob 3 hours ago | parent [-]

> It's possible but unlikely given the short timeline

Yep. "possible but unlikely" was my take too. As another person commented, this isn't really a benchmark, and as long as that's clear, it seems fair. My only fear is that some submissions may be AI-assisted rather than fully AI-generated, with crucial insights coming from experienced mathematicians. That's still a real achievement even if it's human + AI collaboration. But I fear that the nuance would be lost on news media and they'll publish news about the dawn of fully autonomous math reasoning.

iLoveOncall an hour ago | parent | prev [-]

That was exactly my first thought as well. All those exercises are pointless, and people don't seem to understand it; it's baffling.

Even if it's not Anthropic or OpenAI paying for the solutions, maybe it'll be someone solving them "for fun" because the paper got popular and posting them online.

It's a futile exercise.