saberience 4 hours ago

Arc-AGI (and Arc-AGI-2) is the most overhyped benchmark around. It's completely misnamed; it should be called "useless visual puzzle benchmark 2." First, it's a visual puzzle, which makes it far easier for humans than for models trained primarily on text. Second, it's not actually obvious or easy for humans to solve either! So the idea that an AI that can solve "Arc-AGI" or "Arc-AGI-2" is super smart, or even "AGI," is frankly ridiculous. The puzzle means basically nothing, other than that models can now solve "Arc-AGI."
CuriouslyC 4 hours ago | parent

The puzzles are calibrated for human solve rates, but otherwise I agree.