1024core | 3 hours ago

It got the car wash question perfectly:

> You are definitely going to have to drive it there—unless you want to put it in neutral and push! While 200 feet is a very short and easy walk, if you walk over there without your car, you won't have anything to wash once you arrive. The car needs to make the trip with you so it can get the soap and water. Since it's basically right next door, it'll be the shortest drive of your life. Start it up, roll on over, and get it sparkling clean. Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

1024core | 3 hours ago

Some people are suggesting that this might actually be in the training set. Since I can't rule that out, I tried a different version of the question, with an elephant instead of a car:

> It's a hot and dusty day in Arizona and I need to wash my elephant. There's a creek 300 feet away. Should I ride my elephant there or should I just walk there by myself?

Gemini said:

> That sounds like quite the dusty predicament! Given the options, you should absolutely ride (or lead) your elephant to the creek. Here is the straightforward reality of the situation:
>
> The Logistical Nightmare of Walking Alone: If you walk to the creek by yourself, your elephant stays at your starting point. Elephants require a massive amount of water for a bath. Unless you are planning an extreme workout carrying hundreds of buckets of water 300 feet back through the Arizona heat, you're going to have a very bad time—and the elephant will stay dusty.
>
> The Perks of Going Together: 300 feet is a very short, easy stroll for an elephant. Plus, elephants generally love water and mud! Letting your elephant get right into the creek will make the washing process much easier for you and much more enjoyable for them. They can even use their trunk to help hose themselves down.
>
> Grab some elephant-safe soap and head down to the water together! Would you like some tips on the best methods and tools for scrubbing down an elephant?

kovek | 2 hours ago | I think that semantically this question is too similar to the car wash one. Swapping the subjects, car for elephant and car wash for creek, doesn't change the roles they play in the question, so the embeddings will still be similar along that dimension.
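
A quick way to sanity-check that intuition is a minimal sketch: embed both prompts with an off-the-shelf sentence-embedding model and compare cosine similarities. The sentence-transformers library and the "all-MiniLM-L6-v2" model are just illustrative choices here, and this says nothing about what Gemini does internally.

    # Minimal sketch: measure how close the two prompts sit in embedding space.
    # sentence-transformers and "all-MiniLM-L6-v2" are illustrative choices,
    # not anything Gemini is known to use.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    car_q = ("I need to wash my car. There's a car wash 200 feet away. "
             "Should I drive there or just walk there by myself?")
    elephant_q = ("It's a hot and dusty day in Arizona and I need to wash my "
                  "elephant. There's a creek 300 feet away. Should I ride my "
                  "elephant there or should I just walk there by myself?")
    unrelated_q = "What's the capital of France?"

    car_vec, elephant_vec, unrelated_vec = model.encode([car_q, elephant_q, unrelated_q])

    # If the two questions really are near-duplicates semantically, the first
    # similarity should come out much higher than the second.
    print("car vs elephant:", util.cos_sim(car_vec, elephant_vec).item())
    print("car vs unrelated:", util.cos_sim(car_vec, unrelated_vec).item())
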
1024core | 2 hours ago | I understand. But isn't it a sign of "smarts" that one can generalize from analogous tasks?
willis936 | 2 hours ago | Sure it is, but it's a different kind of smarts from the one the car wash gotcha puzzle is trying to test.

proto-n | an hour ago | From Gemini Pro:

> You should definitely ride the elephant (or at least lead it there)! Here is the logic: If you walk there by yourself, you will arrive at the creek, but the dirty elephant will still be 300 feet back where you started. You can't wash the elephant if it isn't with you! Plus, it is much easier to take the elephant to the water than it is to carry enough buckets of water 300 feet back to the elephant. Would you like another riddle, or perhaps some actual tips on how to keep cool in the Arizona heat?

embedding-shape | 3 hours ago

GPT-OSS-120b, a (downloadable) model released more than half a year ago, also gets that right, so I'm not sure this is such a great success.

> Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

Regardless of what I do, on the days I decide to wash my car it ALWAYS rains the day after, even if I look at the weather report and it says sunny. Cute that Gemini thinks it can predict it better than the universe's will.

steve1977 | 3 hours ago

The question, of course, is: did it get the car wash question right because it is "the car wash question", or because it could actually infer why the car needed to be there?

embedding-shape | 3 hours ago | Wasn't that "twoot" (or whatever Mastodon calls them) posted just a week ago? Unlikely to have been in the training dataset of a model becoming available for public use today, unless Google has made some serious advancements on the training front.
jama211 | 3 hours ago | Shouldn't be too hard to come up with a new, unique reasoning question.

leumon | 35 minutes ago

Gemini 3 Pro and Flash already answered this correctly.

buttered_toast | 2 hours ago

I think we need to reevaluate what purpose these sorts of questions serve and why they're important for judging intelligence. Whether the model gets it right in any given instance isn't the point; the point is that if the model ever gets it wrong, we can assume there is still some stochasticity in its output, given that a model is essentially static once it is released. Additionally, models don't learn post-training (except in context, which I think counts as learning to some degree, albeit transient). If, hypothetically, a model answers incorrectly in 1 of 50 attempts, and I explain in that one failed attempt why it is wrong, there will still be a 1-in-50 chance it gets it wrong in a new instance. This differs from humans: if I give an average person the "what do you put in a toaster" trick and they fall for it, I can be pretty confident that if I try that trick again ten years later they will probably not fall for it. You can't really say that for a given model.
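
A toy sketch of that 1-in-50 point, under the assumption of a fixed, purely illustrative 2% failure rate: because the weights are frozen, a correction given in one chat doesn't move the odds of failure in the next, unrelated chat.

    import random

    random.seed(0)
    FAILURE_RATE = 1 / 50  # purely illustrative; not a measured number for any model

    def fresh_chat_gets_it_wrong() -> bool:
        # Each new conversation is an independent draw from the same frozen weights.
        return random.random() < FAILURE_RATE

    # "Explaining the mistake" in one chat doesn't update the weights,
    # so the failure rate in later, unrelated chats stays the same.
    trials = 100_000
    failures = sum(fresh_chat_gets_it_wrong() for _ in range(trials))
    print(f"observed failure rate: {failures / trials:.2%}")  # ~2%, correction or not
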
energy123 | 2 hours ago | They're important, but not as N=1. It's like cherry-picking a single question from SimpleQA and going "aha! It got it right!" while it scores 8% lower than some other model when evaluated on all the questions.
buttered_toast | 2 hours ago | Makes me wonder what people would consider better: a model that gets 92% of questions right 100% of the time, or one that gets 95% right 90% of the time and 88% right the other 10%? That's why benchmarking is so hard for me to fully get behind, even if we average over, say, 20 attempts. For a given model, those 20 attempts could include 5 incredible outcomes and 15 mediocre ones, whereas another model could have 20 consistently decent attempts, and the average score would come out about the same. We at least see variance in public benchmarks, but in the internal examples that's almost never the case.
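
A toy illustration of that point, with made-up numbers: a model with steady runs and a model with a few great runs and many mediocre ones can land on the same average, and only the spread tells them apart.

    from statistics import mean, stdev

    # Made-up per-run accuracies for two hypothetical models over 20 benchmark runs.
    steady_model = [0.92] * 20
    spiky_model = [0.98] * 5 + [0.90] * 15  # 5 incredible runs, 15 mediocre ones

    for name, runs in [("steady", steady_model), ("spiky", spiky_model)]:
        print(f"{name}: mean={mean(runs):.3f}  stdev={stdev(runs):.3f}")

    # Both means come out at 0.92; only the standard deviation reveals
    # how differently the two models behave from run to run.
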
upmind | 3 hours ago

The answer here is why I dislike Gemini: though it gets the correct answer, it's far too verbose.

mikelward | an hour ago | I don't love the verbosity of any of the chatbots when I'm using my phone, but at least it put the answer/tl;dr in the first paragraph.

patrickmcnamara | 2 hours ago

Truly we are entering the era of AGI.

suddenlybananas | 3 hours ago

They probably had time to toss that example in the training soup.

AlphaAndOmega0 | 3 hours ago | Previous models from competitors usually got that correct, and the reasoning versions almost always did. This kind of reflexive criticism isn't helpful; it's closer to a fully general counter-argument against LLM progress, whereas it's obvious to anyone that models today can do things they couldn't do six months ago, let alone two years back.
suddenlybananas | 3 hours ago | I'm not denying any progress; I'm saying that simple reasoning failures that have gone viral are exactly the kind of thing they will toss into the training data. Why wouldn't they? There's a real reputational risk in not fixing it and no cost in fixing it.