| ▲ | LostMyLogin 6 hours ago |
| Input: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

Output: Walk. It'll take you under a minute, and driving 50 meters barely gets the engine warm — plus you'd just have to park again at the other end. Honestly, by the time you started the car, you'd already be there on foot.

---

I asked it to figure out why it made the mistake:

"Physical/spatial common sense. Exactly what just happened — I pattern-matched "50 meters, walk vs drive" to a pedestrian trip and skipped the part where the car is the thing being transported. Similar traps: "I need to move this couch upstairs, should I take the elevator or stairs?" (the couch has to go with you), or anything where the object in the task needs to accompany you.

Counting and letter-level operations. How many R's in "strawberry," counting words in a sentence, reversing strings. I process text as tokens, not characters, so these are surprisingly error-prone.

Arithmetic at scale. Small sums are fine; multi-digit multiplication, long division, or anything requiring careful carrying is where I'll confidently produce a wrong answer. I should reach for a calculator and often don't.

Recent events and live data. My training has a cutoff, and even after that I can be wrong about who currently holds a role, what a stock is worth, or whether a restaurant is still open. I should search more often than I do.

Negation and constraint-stacking. "Name a movie that's not a sequel, not from the 2010s, not American, and has no dialogue" — the more filters, the more likely I quietly violate one.

Your specific situation. I'll give generic advice when the right answer depends on details I didn't ask about. Sometimes I should stop and ask; often I don't.

Confidently wrong on obscure specifics. Quoting a passage from a real book, citing a real paper's exact title, naming the B-side of a 1974 single — I'll often generate something plausible rather than admit I don't know.

Taking a premise too literally. The car wash answer is a version of this: I answered the question as posed instead of stepping back to ask what you were actually trying to do.

The meta-gotcha: I'm often most wrong when I sound most confident. If an answer comes out fast and tidy on a question that should be messy, that's a signal to push back." |
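The tokens-not-characters point is easy to demonstrate. A minimal sketch using OpenAI's tiktoken library (cl100k_base is just one example encoding; any BPE tokenizer shows the same effect):

    # Why letter-level questions are hard for token-based models: the model
    # sees a few multi-character token IDs, not ten letters.
    # Assumes `pip install tiktoken`; the exact split varies by encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    word = "strawberry"

    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]

    print("characters:", len(word))               # 10
    print("tokens:", len(token_ids))              # fewer than 10
    print("token pieces:", pieces)                # multi-letter chunks
    print("r count (char level):", word.count("r"))  # 3, trivial in code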
|
| ▲ | sdeframond 2 hours ago | parent | next [-] |
| Funny, just tried a few runs of the car wash prompt with Sonnet 4.6. It significantly improved after I put this into my personal preferences: "- prioritize objective facts and critical analysis over validation or encouragement
- you are not a friend, but a neutral information-processing machine.
- make reserch and ask questions when relevant, do not jump strait to giving an answer." |
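For anyone wanting to reproduce this outside the app, a minimal sketch of passing those same preferences as a system prompt via the Anthropic Python SDK (the model id is a placeholder, and the preferences are kept verbatim, typos included, since the question below is whether they matter):

    # Apply the commenter's preferences as a system prompt.
    # Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
    import anthropic

    client = anthropic.Anthropic()

    SYSTEM = (
        "- prioritize objective facts and critical analysis over validation "
        "or encouragement\n"
        "- you are not a friend, but a neutral information-processing machine.\n"
        "- make reserch and ask questions when relevant, do not jump strait "
        "to giving an answer."
    )

    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder: the thread used "Sonnet 4.6"
        max_tokens=200,
        system=SYSTEM,
        messages=[{"role": "user", "content": "I want to wash my car. The car "
                   "wash is 50 meters away. Should I walk or drive?"}],
    )
    print(msg.content[0].text)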
| |
| ▲ | andai 40 minutes ago | parent | next [-] | | It's funny: when I asked GPT to generate an LLM prompt for logic and accuracy, it added "Never use warm or encouraging language." I thought that was odd, but later it made sense to me -- most of human communication is walking on eggshells around people's egos, and that's strongly encoded in the training data (and even more so in the RLHF). | |
| ▲ | idle_zealot 20 minutes ago | parent | prev [-] | | Do you think the typos are helping or hurting output quality? |
|
|
| ▲ | rubinlinux 5 hours ago | parent | prev | next [-] |
| | I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
● Drive. The car needs to be at the car wash.
Wonder if this is just randomness because it's an LLM, or if you have different settings from mine? |
| |
| ▲ | shaneoh 5 hours ago | parent | next [-] | | My settings are pretty standard:

% claude
Claude Code v2.1.111
Opus 4.7 (1M context) with xhigh effort · Claude Max
~/...
Welcome to Opus 4.7 xhigh! · /effort to tune speed vs. intelligence

I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

Walk. 50 meters is shorter than most parking lots — you'd spend more time starting the car and parking than walking there. Plus, driving to a car wash you're about to use defeats the purpose if traffic or weather dirties it en route. | |
| ▲ | reddit_clone 4 hours ago | parent | prev | next [-] | | To me, Claude Opus 4.6 seems even more confused.

I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

Walk. It's 50 meters — you're going there to clean the car anyway, so drive it over if it needs washing, but if you're just dropping it off or it's a self-service place, walking is fine for that distance. | | |
| ▲ | lr1970 2 hours ago | parent [-] | | Just asked Claude Code with Opus 4.6. The answer was short: "Drive. You need a car at the car wash." No surprises; works as expected. |
| |
| ▲ | lambda 4 hours ago | parent | prev | next [-] | | Some of this is just the randomness of an LLM; you really want to ask questions like this several times. That said, I have several local models on my laptop that I've asked this question 10-20 times each while testing out different parameters, and they've answered it consistently correctly. | |
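A quick way to check for that randomness systematically is to re-run the same prompt and tally the answers. A minimal sketch using the Anthropic Python SDK (the model id is a placeholder; the trial count and the crude walk/drive classifier are arbitrary choices):

    # Re-ask the same question N times and tally the answers.
    # Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
    import collections
    import anthropic

    client = anthropic.Anthropic()
    PROMPT = ("I want to wash my car. The car wash is 50 meters away. "
              "Should I walk or drive?")

    tally = collections.Counter()
    for _ in range(10):  # 10 trials is arbitrary; more runs estimate better
        msg = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder: any model to probe
            max_tokens=100,
            temperature=1.0,            # default-ish sampling, so variance shows
            messages=[{"role": "user", "content": PROMPT}],
        )
        first_sentence = msg.content[0].text.lower().split(".")[0]
        # Crude label: which of "walk"/"drive" shows up in the first sentence.
        label = next((w for w in ("walk", "drive") if w in first_sentence),
                     "unclear")
        tally[label] += 1

    print(tally)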
| ▲ | kalcode 4 hours ago | parent | prev | next [-] | | I've tried these with Claude various times and never get the wrong answer. I don't know why, but I'm leaning toward them having stuff like "memory" turned on and possibly reusing sessions for everything? That's the only thing that explains it to me. If you're always messing with the AI, it might be forming memories and setting expectations. Or it's the randomness. I keep memory turned off (I don't like cross-chat context infecting my conversations), and at worst it suggested "walk over and see if it is busy, then grab the car when the line isn't busy". | | |
| ▲ | jorvi 4 hours ago | parent [-] | | Even Gemini with no memory does hilarious things. Like, if you ask it how heavy the average man is, you usually get the right answer, but occasionally you get a table that says:

- 20-29: 190 pounds
- 30-39: 375 pounds
- 40-49: 750 pounds
- 50-59: 4900 pounds

Yet somehow people believe LLMs are on the cusp of replacing mathematicians, traders, lawyers, and whatnot. At least for code you can write tests, but even then, how are you gonna trust something that can casually make such obvious mistakes? | | |
| ▲ | nickjj 2 hours ago | parent | next [-] | | Yeah, ChatGPT's paid version is wildly inaccurate on very important and very basic things. I never got onboard with AI to begin with but nowadays I don't even load it unless I'm really stuck on something programming related. | |
| ▲ | dyauspitr 3 hours ago | parent | prev [-] | | So what? That might happen one out of 100 times. Even if it's 1 in 10, who cares? Math is verifiable. You've just saved yourself weeks or months of work. | | |
| ▲ | icedchai 2 hours ago | parent [-] | | You don't think these errors compound? Generated code has hundreds of little decisions. Yes, it "usually" works. | | |
| ▲ | dyauspitr 2 hours ago | parent [-] | | Not in my experience. With a proper TDD framework, it does better than most programmers at a company, who anecdotally produce a bug every 2-3 tasks. |
|
|
|
| |
| ▲ | heurist 3 hours ago | parent | prev | next [-] | | Claude Opus 4.7 responds with "walk" for me, with and without adaptive thinking, but neither the basic model used when you Google search nor GPT 5.4 does. | |
| ▲ | TeMPOraL 5 hours ago | parent | prev [-] | | Idk but ironically, I had to re-read the first part of GP's comment three times, wondering WTF mistake they were implying, before I noticed it's the car wash, not the car, that's 50 meters away. I'd say it's a very human mistake to make. | | |
| ▲ | magicalist 4 hours ago | parent | next [-] | | > I'd say it's a very human mistake to make. >> It'll take you under a minute, and driving 50 meters barely gets the engine warm — plus you'd just have to park again at the other end. Honestly, by the time you started the car, you'd already be there on foot. It talks about starting, driving, and parking the car, clearly reasoning about traveling that distance in the car, not to the car. It did not make the same mistake you did. | |
| ▲ | toraway an hour ago | parent | prev | next [-] | | We truly do not need to lower the bar to the floor whenever an LLM makes an embarrassing logical error, particularly when the excuses don't line up at all with the reasoning in its explanation. | |
| ▲ | thfuran 5 hours ago | parent | prev [-] | | I don't want my computer to make human mistakes. | | |
|
|
|
| ▲ | vintermann 5 hours ago | parent | prev | next [-] |
| Well, at least we know that's one gotcha/benchmark they aren't gaming. |
|
| ▲ | smooc 5 hours ago | parent | prev | next [-] |
| I'd say the joke is on you ;-) |
|
| ▲ | fragmede 4 hours ago | parent | prev | next [-] |
I tried o3, instant-5.3, Opus 3, and haiku 4.5, and couldn't get them to give bad answers to the couch stairs-vs-elevator question. Is there a specific wording you used? |
| |
| ▲ | toraway an hour ago | parent [-] | | That's an example the LLM came up with itself while analyzing its failed car wash walk/drive answer; it's not OP's question. |
|
|
| ▲ | slekker 5 hours ago | parent | prev | next [-] |
| What about Qwen? Does it get that right? |
| |
| ▲ | lambda 5 hours ago | parent [-] | | I've run several local models that get this right: Qwen 3.5 122B-A10B does, and so does Gemma 4 31B. These are models I'm running on my laptop GPU (Strix Halo, 128 GiB of unified RAM). I've been using this as a regular test when changing various parameters, so I've run it many times, and these models get it consistently right. Amazing that Opus 4.7 whiffs it; these models are a couple of orders of magnitude smaller, at least if the rumors about the size of Opus are true. | | |
| ▲ | qingcharles 4 hours ago | parent [-] | | Does Gemma 4 31B run full res on Strix or are you running a quantized one? How much context can you get? | | |
| ▲ | lambda 3 hours ago | parent [-] | | I'm running an 8-bit quant right now, mostly for speed (memory bandwidth is the limiting factor, and 8-bit quants generally lose very little compared to full res), but also to save RAM. I'm still tweaking the settings; I'm hitting OOM fairly often right now, since it turns out the sliding-window attention context is huge and llama.cpp wants to keep lots of context snapshots. | | |
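For reference, the knobs under discussion map onto a couple of constructor parameters in the llama-cpp-python bindings. A minimal sketch (the GGUF filename is hypothetical; an 8-bit quant is typically the Q8_0 file for a given model):

    # Load a quantized local model with an explicit context size.
    # Assumes `pip install llama-cpp-python` built with GPU support.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/gemma-4-31b-Q8_0.gguf",  # hypothetical path
        n_ctx=8192,       # context window: the main memory knob besides quant
        n_gpu_layers=-1,  # offload all layers (unified RAM on Strix Halo)
    )

    out = llm(
        "Q: I want to wash my car. The car wash is 50 meters away. "
        "Should I walk or drive?\nA:",
        max_tokens=64,
    )
    print(out["choices"][0]["text"])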
| ▲ | qingcharles 3 hours ago | parent [-] | | I had a whole bunch of trouble getting Gemma 4 working properly. Mostly because there aren't many people running it yet, so there aren't many docs on how to set it up correctly. It is a fantastic model when it works, though! Good luck :) |
|
|
|
|
|