| ▲ | simonw 6 hours ago |
| The bicycle frame is a bit wonky but the pelican itself is great: https://gist.github.com/simonw/a6806ce41b4c721e240a4548ecdbe... |
|
| ▲ | stkai 5 hours ago | parent | next [-] |
| Would love to find out they're overfitting for pelican drawings. |
| |
| ▲ | fdeage 42 minutes ago | parent | next [-] | | OpenAI claims not to: https://x.com/aidan_mclau/status/1986255202132042164 | |
| ▲ | andy_ppp 5 hours ago | parent | prev | next [-] | | Yes, Raccoon on a unicycle? Magpie on a pedalo? | | | |
| ▲ | theanonymousone 2 hours ago | parent | prev | next [-] | | Even if not intentionally, it is probably leaking into training sets. | |
| ▲ | fragmede 4 hours ago | parent | prev [-] | | The estimation I did 4 months ago: > there are approximately 200k common nouns in English, and then we square that, we get 40 billion combinations. At one second per, that's ~1200 years, but then if we parallelize it on a supercomputer that can do 100,000 per second that would only take 3 days. Given that ChatGPT was trained on all of the Internet and every book written, I'm not sure that still seems infeasible. https://news.ycombinator.com/item?id=45455786 | | |
| ▲ | eli 4 hours ago | parent | next [-] | | How would you generate a picture of Noun + Noun in the first place in order to train the LLM with what it would look like? What's happening during that 1 estimated second? | | | |
| ▲ | AnimalMuppet 3 hours ago | parent | prev [-] | | But you need to also include the number of prepositions. "A pelican on a bicycle" is not at all the same as "a pelican inside a bicycle". There are estimated to be 100 or so prepositions in English. That gets you to 4 trillion combinations. |
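The arithmetic in the two comments above can be checked directly. The noun, preposition, and throughput counts are the commenters' rough estimates, not measurements; note the parallelized figure comes out nearer 4.6 days than 3:

```python
# Back-of-envelope brute-force cost for "noun (preposition) noun" prompts.
# All inputs are rough estimates quoted in the thread, not measured values.
nouns = 200_000        # approximate common English nouns
prepositions = 100     # approximate English prepositions

pairs = nouns * nouns              # noun + noun combinations
with_preps = pairs * prepositions  # noun + preposition + noun combinations

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
years_serial = pairs / SECONDS_PER_YEAR  # at 1 generation per second

rate = 100_000                           # generations/second, parallelized
days_parallel = pairs / rate / (60 * 60 * 24)

print(f"{pairs:,} pairs, {with_preps:,} with prepositions")
print(f"~{years_serial:,.0f} years serial, ~{days_parallel:.1f} days at {rate:,}/s")
```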
|
|
|
| ▲ | gcanyon 5 hours ago | parent | prev | next [-] |
| One aspect of this is that apparently most people can't draw a bicycle much better than this: they get the elements of the frame wrong, mess up the geometry, etc. |
| |
| ▲ | arionmiles 4 hours ago | parent | next [-] | | There's a research paper from the University of Liverpool, published in 2006, in which researchers asked people to draw bicycles from memory, showing how people overestimate their understanding of basic things. It was a very fun and short read. It's called "The science of cycology: Failures to understand how everyday objects work" by Rebecca Lawson. https://link.springer.com/content/pdf/10.3758/bf03195929.pdf | | |
| ▲ | devilcius 2 hours ago | parent | next [-] | | There’s also a great art/design project about exactly this. Gianluca Gimini asked hundreds of people to draw a bicycle from memory, and most of them got the frame, proportions, or mechanics wrong.
https://www.gianlucagimini.it/portfolio-item/velocipedia/ | |
| ▲ | rcxdude 4 hours ago | parent | prev [-] | | A place I worked at used it as part of an interview question (it wasn't some pass/fail thing to get it 100% correct, and was partly a jumping off point to a different question). This was in a city where nearly everyone uses bicycles as everyday transportation. It was surprising how many supposedly mechanical-focused people who rode a bike everyday, even rode a bike to the interview, would draw a bike that would not work. | | |
| ▲ | gcanyon 2 hours ago | parent | next [-] | | I wish I had interviewed there. When I first read that people have a hard time with this I immediately sat down without looking at a reference and drew a bicycle. I could ace your interview. | |
| ▲ | throwuxiytayq 3 hours ago | parent | prev [-] | | This is why at my company in interviews we ask people to draw a CPU diagram. You'd be surprised how many supposedly-senior computer programmers would draw a processor that would not work. | | |
| ▲ | niobe 3 hours ago | parent | next [-] | | If I was asked that question in an interview to be a programmer I'd walk out. How many abstraction layers either side of your knowledge domain do you need to be an expert in? Further, being a good technologist of any kind is not about having arcane details at the tip of your frontal lobe, and a company worth working for would know that. | |
| ▲ | gedy 3 hours ago | parent | prev | next [-] | | That's reasonable in many cases, but I've had situations like this for senior UI and frontend positions where they don't ask UI or frontend questions at all, just their pet low-level questions. Some even snort that it's softball to ask UI questions, or say "they use whatever". It's like, yeah, no wonder your UI is shit and now you're hiring to clean it up. | |
| ▲ | rsc 2 hours ago | parent | prev [-] | | Raises hand. |
|
|
| |
| ▲ | gnatolf 5 hours ago | parent | prev | next [-] | | Absolutely. A technically correct bike is very hard to draw in SVG without going overboard in details | | | |
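As a rough illustration of how few elements a geometrically plausible bicycle actually needs in SVG (the coordinates and proportions here are invented, not from any model's output): two wheel circles plus the six tubes of a diamond frame, fork included.

```python
# Minimal-detail sketch of a technically plausible SVG bicycle:
# two wheels and a diamond frame (rear triangle, front triangle, fork).
wheel_r = 40
rear, front = (60, 160), (240, 160)                 # axle centres
bb, seat, head = (130, 160), (120, 90), (210, 90)   # bottom bracket, seat, head

frame = [(rear, seat), (rear, bb), (bb, seat),      # rear triangle
         (bb, head), (seat, head), (head, front)]   # front triangle + fork

parts = [f'<circle cx="{x}" cy="{y}" r="{wheel_r}" fill="none" stroke="black"/>'
         for x, y in (rear, front)]
parts += [f'<line x1="{a[0]}" y1="{a[1]}" x2="{b[0]}" y2="{b[1]}" stroke="black"/>'
          for a, b in frame]
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="300" height="220">'
       + "".join(parts) + "</svg>")
print(svg)
```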
| ▲ | nateglims 4 hours ago | parent | prev | next [-] | | I just had an idea for an RLVR startup. | |
| ▲ | cyanydeez 5 hours ago | parent | prev [-] | | Yes, but obviously AGI will solve this by, _checks notes_ more terawatts! | | |
|
|
| ▲ | franze 3 hours ago | parent | prev | next [-] |
here's the animated version:
https://claude.ai/public/artifacts/3db12520-eaea-4769-82be-7... |
| |
|
| ▲ | einrealist 6 hours ago | parent | prev | next [-] |
| They trained for it. That's the +0.1! |
|
| ▲ | etwigg 2 hours ago | parent | prev | next [-] |
| If we do get paperclipped, I hope it is of the "cycling pelican" variety. Thanks for your important contribution to alignment Simon! |
|
| ▲ | zahlman 3 hours ago | parent | prev | next [-] |
| Do you find that word choices like "generate" (as opposed to "create", "author", "write" etc.) influence the model's success? Also, is it bad that I almost immediately noticed that both of the pelican's legs are on the same side of the bicycle, but I had to look up an image on Wikipedia to confirm that they shouldn't have long necks? Also, have you tried iterating prompts on this test to see if you can get more realistic results? (How much does it help to make them look up reference images first?) |
| |
| ▲ | simonw an hour ago | parent [-] | | I've stuck with "Generate an SVG of a pelican riding a bicycle" because it's the same prompt I've been using for over a year now and I want results that are sort-of comparable to each other. I think when I first tried this I iterated a few times to get to something that reliably output SVG, but honestly I didn't keep the notes I should have. |
|
|
| ▲ | athrowaway3z 6 hours ago | parent | prev | next [-] |
This benchmark inspired me to have codex/claude build a DnD battlemap tool with SVGs. They got surprisingly far, but I did need to iterate a few times to have it build tools that would check for things like: don't put walls on roads or water. What I think might be the next obstacle is self-knowledge. The new agents seem to have picked up ever more vocabulary about their context and compaction, etc. As a next benchmark you could try having one agent and telling it to use a coding agent (via tmux) to build you a pelican. |
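The kind of consistency check described above ("don't put walls on roads or water") can be sketched as a simple grid validation pass; the tile names and map representation here are hypothetical, not from the actual tool:

```python
# Hypothetical sketch of a battlemap validation pass: reject wall
# placements that land on terrain tiles walls shouldn't cover.
BLOCKED_FOR_WALLS = {"road", "water"}

def wall_conflicts(terrain: dict[tuple[int, int], str],
                   walls: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Return the wall coordinates that sit on blocked terrain."""
    return [xy for xy in walls if terrain.get(xy) in BLOCKED_FOR_WALLS]

terrain = {(0, 0): "grass", (1, 0): "road", (2, 0): "water"}
print(wall_conflicts(terrain, [(0, 0), (1, 0), (2, 0)]))  # [(1, 0), (2, 0)]
```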
|
| ▲ | hoeoek 6 hours ago | parent | prev | next [-] |
| This really is my favorite benchmark |
|
| ▲ | eaf7e281 6 hours ago | parent | prev | next [-] |
| There's no way they actually work on training this. |
| |
| ▲ | margalabargala 6 hours ago | parent | next [-] | | I suspect they're training on this. I asked Opus 4.6 for a pelican riding a recumbent bicycle and got this. https://i.imgur.com/UvlEBs8.png | | |
| ▲ | WarmWash 5 hours ago | parent | next [-] | | It would be way, way better if they were benchmaxxing this. The pelican in the image (both images) has arms. Pelicans don't have arms, and a pelican riding a bike would use its wings. | | |
| ▲ | ryandrake 4 hours ago | parent | next [-] | | Having briefly worked in the 3D Graphics industry, I don't even remotely trust benchmarks anymore. The minute someone's benchmark performance becomes a part of the public's purchasing decision, companies will pull out every trick in the book--clean or dirty--to benchmaxx their product. Sometimes at the expense of actual real-world performance. | |
| ▲ | seanhunter 4 hours ago | parent | prev [-] | | Pelicans don’t ride bikes.
You can’t have scruples about whether or not the image of a pelican riding a bike has arms. | | |
| ▲ | jevinskie 4 hours ago | parent [-] | | Wouldn’t any decent bike-riding pelican have a bike tailored to pelicans and their wings? | | |
| ▲ | actsasbuffoon 3 hours ago | parent | next [-] | | Sure, that’s one solution. You could also Isle of Dr Moreau your way to a pelican that can use a regular bike. The sky is the limit when you have no scruples. | |
| ▲ | cinntaile 4 hours ago | parent | prev [-] | | Now that would be a smart chat agent. |
|
|
| |
| ▲ | mrandish 6 hours ago | parent | prev | next [-] | | Interesting that it seems better. Maybe something about adding a highly specific yet unusual qualifier focusing attention? | |
| ▲ | riffraff 4 hours ago | parent | prev [-] | | perhaps try a penny farthing? |
| |
| ▲ | KeplerBoy 6 hours ago | parent | prev | next [-] | | There is no way they are not training on this. | | | |
| ▲ | fragmede 4 hours ago | parent | prev [-] | | The people who work at Anthropic are aware of simonw and his test, and people aren't unthinking data-driven machines. However valid his test is or isn't, a better score on it is convincing. If it gets, say, 1,000 people to use Claude Code over Codex, how much would that be worth to Anthropic? $200 * 1,000 = $200k/month. I'm not saying they are, but to say with such certainty that they aren't when money is on the line seems like a questionable conclusion, unless you have some insider knowledge you'd like to share with the rest of the class. |
|
|
| ▲ | beemboy 3 hours ago | parent | prev | next [-] |
| Isn't there a point at which it trains itself on these various outputs, or someone somewhere draws one and feeds it into the model so as to pass this benchmark? |
|
| ▲ | bityard 5 hours ago | parent | prev | next [-] |
| Well, the clouds are upside-down, so I don't think I can give it a pass. |
|
| ▲ | nine_k 4 hours ago | parent | prev | next [-] |
| I suppose the pelican must be now specifically trained for, since it's a well-known benchmark. |
|
| ▲ | 7777777phil 6 hours ago | parent | prev | next [-] |
Best pelican so far, would you say? Or where does it rank on the pelican benchmark? |
| |
|
| ▲ | nubg 6 hours ago | parent | prev | next [-] |
| What about the Pelo2 benchmark? (the gray bird that is not gray) |
|
| ▲ | copilot_king_2 6 hours ago | parent | prev | next [-] |
| I'm firing all of my developers this afternoon. |
| |
|
| ▲ | 6thbit 4 hours ago | parent | prev | next [-] |
Do you have a GIF? I need an evolving pelican GIF |
|
| ▲ | risyachka 4 hours ago | parent | prev | next [-] |
| Pretty sure at this point they train it on pelicans |
|
| ▲ | ares623 6 hours ago | parent | prev | next [-] |
| Can it draw a different bird on a bike? |
| |
|
| ▲ | DetroitThrow 6 hours ago | parent | prev | next [-] |
| The ears on top are a cute touch |
|
| ▲ | iujasdkjfasf 3 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | behnamoh 6 hours ago | parent | prev | next [-] |
| [flagged] |
| |
| ▲ | smokel 5 hours ago | parent | next [-] | | I'll bite. The benchmark is actually pretty good. It shows in an extremely comprehensible way how far LLMs have come. Someone not in the know has a hard time understanding what 65.4% means on "Terminal-Bench 2.0". Comparing some crappy pelicans on bicycles is a lot easier. | | |
| ▲ | blibble 3 hours ago | parent [-] | | it ceases to be a useful benchmark of general ability when you post it publicly for them to train against |
| |
| ▲ | quinnjh 5 hours ago | parent | prev [-] | | The field is advancing so fast it's hard to do real science, as there will be a new SOTA by the time you're ready to publish results. I think this is a combination of that and people having a laugh. Would you mind sharing which benchmarks you think are useful measures of multimodal reasoning? | | |
| ▲ | techpression 3 hours ago | parent [-] | | A benchmark only tests what the benchmark is doing; the goal is to make that task correlate with actually valuable things. Graphics benchmarks are a good example: it's extremely hard to know what you'll get in a game by looking at 3DMark scores, and it varies a lot.
Making an SVG of a single thing doesn't help much unless that applies to all SVG tasks. |
|
|
|
| ▲ | fullstackchris 3 hours ago | parent | prev [-] |
| [flagged] |
| |
| ▲ | dang 2 hours ago | parent [-] | | Personal attacks are not allowed on HN. No more of this, please. |
|