CSMastermind 2 days ago

Okay, I decided to benchmark a bunch of AI models with GeoGuessr. One game each on Diverse World; here's how they did out of a possible 25,000:

Claude 3.7 Sonnet: 22,759

Qwen2.5-Max: 22,666

o3-mini-high: 22,159

Gemini 2.5 Pro: 18,479

Llama 4 Maverick: 14,316

mistral-large-latest: 10,405

Grok 3: 5,218

Deepseek R1: 0

command-a-03-2025: 0

Nova Pro: 0
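(For context on the 25,000 ceiling: a standard GeoGuessr game is five rounds, each worth up to 5,000 points, with the score decaying as the guess gets farther from the true location. The sketch below uses the community-derived approximation of that decay; the exact constant and formula are assumptions for illustration, not GeoGuessr's official scoring.)

```python
import math

# Commonly cited "size" of the world map in km; an assumption taken
# from community reverse-engineering, not an official value.
WORLD_MAP_SIZE_KM = 14916.862

def round_score(distance_km: float, map_size_km: float = WORLD_MAP_SIZE_KM) -> int:
    """Approximate points for one round: 5000 for a perfect guess,
    decaying exponentially with distance from the true location."""
    return round(5000 * math.exp(-10 * distance_km / map_size_km))

def game_score(distances_km) -> int:
    """A standard game is five rounds, for a maximum of 25,000."""
    return sum(round_score(d) for d in distances_km)
```

Under this approximation, a model that lands within ~100 km on every round would still score over 23,000, so the top entries above are placing guesses very close to the target.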

nemo1618 2 days ago

Neat, thanks for doing this!

msephton 2 days ago

How does Google Lens compare?

CSMastermind 2 days ago

I tried it, but as far as I can tell Google Lens doesn't give you a location; it just describes generally what you're looking at.

bn-l 2 days ago

What about o4-mini-high?

CSMastermind 2 days ago

OpenAI's naming confuses me, but I ran o4-mini-2025-04-16 through a game and it got 23,885.

bn-l a day ago

Interesting. That supports what they said (that this is the model with strong visual reasoning).