postalcoder 6 hours ago

I made pelicans at different thinking efforts:

https://hcker.news/pelican-low.svg

https://hcker.news/pelican-medium.svg

https://hcker.news/pelican-high.svg

https://hcker.news/pelican-xhigh.svg

Someone needs to make a pelican arena; I have no idea whether these are considered good or not.

seanw444 6 hours ago | parent | next [-]

Can someone explain how we arrived at the pelican test? Was there some actual theory behind why it's difficult to produce? Or did someone just think it up, discover it was consistently difficult, and now we just all know it's a good test?

simonw 6 hours ago | parent | next [-]

I set it up as a joke, to make fun of all the other benchmarks. To my surprise it ended up being a remarkably good measure of a model's quality on other tasks (up to a certain point, at least), though I've never seen a convincing argument as to why.

I gave a talk about it last year: https://simonwillison.net/2025/Jun/6/six-months-in-llms/

It should not be treated as a serious benchmark.
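For anyone curious about the mechanics: the benchmark is just the prompt "Generate an SVG of a pelican riding a bicycle", and the result is judged by eye. A minimal sketch of the one part you can automate, checking that a model's raw text output is at least well-formed SVG before rendering it (the `is_renderable_svg` helper is hypothetical, not part of Simon's setup):

```python
import xml.etree.ElementTree as ET

def is_renderable_svg(text: str) -> bool:
    """Crude sanity check: the output parses as XML and the root is an <svg> element."""
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        return False
    # A namespaced root parses as '{http://www.w3.org/2000/svg}svg'
    return root.tag.endswith("svg")

# Example: a response that is valid SVG, and one that is not
sample = '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="5" cy="5" r="4"/></svg>'
print(is_renderable_svg(sample))            # True
print(is_renderable_svg("not svg at all"))  # False
```

Whether the well-formed SVG actually looks like a pelican on a bicycle remains, of course, a human judgment call.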

jimbokun 5 hours ago | parent [-]

What it has going for it is human interpretability.

Anyone can look and decide if it’s a good picture or not. But the numeric benchmarks don’t tell you much if you aren’t already familiar with that benchmark and how it’s constructed.

redox99 6 hours ago | parent | prev | next [-]

It all began with a Microsoft researcher showing a unicorn drawn in TikZ by GPT-4. It was an example of something so outrageous that there was no way it existed in the training data. And that's back when models were not multimodal.

Nowadays I think it's pretty silly, because there's surely SVG drawing training data and some effort from the researchers put into this task. It's not a showcase of emergent properties.

Gander5739 6 hours ago | parent | prev | next [-]

https://simonwillison.net/2025/Jun/6/six-months-in-llms/

CamperBob2 6 hours ago | parent | prev [-]

It's interesting to see some semblance of spatial reasoning emerge from systems based on textual tokens. Could be seen as a potential proxy for other desirable traits.

It's meta-interesting that few if any models actually seem to be training on it. Same with other stereotypical challenges like the car-wash question, which is still sometimes failed by high-end models.

If I ran an AI lab, I'd take it as a personal affront if my model emitted a malformed pelican or advised walking to a car wash. Heads would roll.

deflator 6 hours ago | parent | prev | next [-]

They are not good, and they seem to get worse as you increase effort. Weird.

postalcoder 6 hours ago | parent | next [-]

Yeah. I've always loosely correlated pelican quality with big-model smell, but I'm not picking that up here. I thought this was supposed to be spud? Weird indeed.

throw310822 6 hours ago | parent | prev [-]

No, but I can sense the movement; I think it's already reached the level of intelligence that draws it towards futurism or cubism. /s

lexarflash8g 3 hours ago | parent | prev | next [-]

None of them have the pelican's feet placed properly on the pedals -- or the pedals are misrepresented. Cool art style but not physically accurate.

bravoetch 5 hours ago | parent | prev | next [-]

I tried getting it to generate OpenSCAD models, which seems much harder. I haven't had much joy with the results yet.

lostmsu an hour ago | parent | prev [-]

https://pelicans.borg.games/