allemagne a day ago
Yeah, this is the issue with the prompt: it also trips up humans who gloss over "cousin". I'm assuming that pointing this out leads you, the human, to reread the prompt, go "ah ok", and adjust the way you're thinking about it. ChatGPT (and DeepSeek at least) will usually just double and triple down, repeating "this challenges gender assumptions" over and over.
roughly a day ago
This is a good one. I just argued with Gemini 2.5 Pro for a few minutes on this and it absolutely would not yield.
crazygringo a day ago
Oh, I see. I mean yeah, I totally missed that too. When we see something we recognize at a glance, we don't usually double-check that it's exactly correct, because we assume the speaker isn't trying to trick us. I make typos in ChatGPT prompts all the time, and I'm consistently impressed that it answers what I actually meant to ask. So to me, this feels like a feature, not a bug.

I followed up with:

> So the first five words are "a man and his cousin" -- not his son. Does that change the riddle?

And got back:

> Great observation — yes, that wording does make the riddle more subtle. Let’s break it down:
>
> • “A man and his cousin are in a car crash.”
> • “The man dies, but the cousin is taken to the emergency room.”
> • At the OR, the surgeon says: “I cannot operate on him. He’s my son.”
>
> So if the cousin is the one taken to the OR, and the surgeon says he’s my son, then the cousin is the surgeon’s son.

Seems fine to me?