simonw 11 hours ago
The fact that pelicans can't ride bicycles is pretty much the point of the benchmark! Asking an LLM to draw something that's physically impossible means it can't just "get it right" - seeing how different models (especially at different sizes) handle the problem is surprisingly interesting.

Honestly though, the benchmark was originally meant to be a stupid joke. I only started taking it slightly more seriously about six months ago, when I noticed that the quality of the pelican drawings really did correspond quite closely to how generally good the underlying models were. If a model draws a really good picture of a pelican riding a bicycle there's a solid chance it will be great at all sorts of other things. I wish I could explain why that is!

If you start here and scroll through the progression of pelican-on-bicycle images it's honestly spooky how well they match the vibes of the models they represent: https://simonwillison.net/2025/Jun/6/six-months-in-llms/#ai-...

So ever since then I've continued to get models to draw pelicans. I certainly wouldn't suggest anyone make serious decisions about model usage based on my stupid benchmark, but it's a fun first-day impression thing, and it appears to be a useful signal for which models are worth diving into in more detail.
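For anyone who wants to try it, the whole benchmark is a single prompt asking for an SVG, so it's easy to reproduce. Here's a minimal Python sketch that shells out to the llm CLI - the model IDs are placeholders, swap in whatever you want to test:

    # Minimal sketch of the pelican benchmark, assuming the `llm` CLI is
    # installed (`pip install llm`) and configured with API keys.
    # The model IDs below are illustrative placeholders.
    import subprocess

    PROMPT = "Generate an SVG of a pelican riding a bicycle"
    MODELS = ["gpt-4o-mini", "gpt-4o"]  # placeholder model IDs

    for model in MODELS:
        result = subprocess.run(
            ["llm", "-m", model, PROMPT],
            capture_output=True, text=True, check=True,
        )
        # Save each model's SVG so the drawings can be compared side by side.
        with open(f"pelican-{model}.svg", "w") as f:
            f.write(result.stdout)

Open the resulting .svg files in a browser and the differences between models are usually obvious at a glance.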
thatwasunusual 9 hours ago
> If a model draws a really good picture of a pelican riding a bicycle there's a solid chance it will be great at all sorts of other things.

Why? If I hired a worker who was really good at drawing pelicans riding a bike, that wouldn't tell me anything about their other qualities?!