crazygringo 21 hours ago

But this is going to be in every AI's training set. I just fed ChatGPT your exact prompt and it gave back exactly what I expected:

This is a classic riddle that challenges assumptions. The answer is:

The surgeon is the boy’s mother.

The riddle plays on the common stereotype that surgeons are male, which can lead people to overlook this straightforward explanation.
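
For anyone who wants to reproduce this outside the ChatGPT UI, here's a rough sketch against the OpenAI API. The `openai` Python client and the "gpt-4o" model name are assumptions, and the riddle wording is paraphrased from this thread rather than the exact prompt:

    # Minimal sketch (assumes the `openai` Python client is installed,
    # OPENAI_API_KEY is set, and "gpt-4o" is a model you have access to).
    from openai import OpenAI

    client = OpenAI()

    riddle = (
        "A man and his cousin are in a car crash. The man dies, but the "
        "cousin is taken to the emergency room. At the OR, the surgeon "
        "says: 'I cannot operate on him. He's my son.' How is this possible?"
    )

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": riddle}],
    )
    print(resp.choices[0].message.content)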

hnuser123456 20 hours ago | parent | next [-]

The surgeon could be the cousin's mom or dad. The cousin's dad didn't die in the crash, his cousin did. The question "how is this possible?" implies there is some sort of contradiction when there isn't any at all. It has nothing to do with sexism, and to say it does reflects a bias in the reader causing them to "spidey sense" a cultural bugaboo when it's utterly irrelevant in this scenario.

harrall 18 hours ago | parent [-]

Can someone explain to me how I read it wrong?

I read it as 2 cousins are in an accident and 1 of the cousins is the son of the surgeon.

What was the contradictory statement that I missed?

sebastialonso 13 hours ago | parent | next [-]

You read it right. There's no contradiction. The famous original riddle starts with "a man and his son". That bit is certainly part of the LLM's training corpus, so it's expected to acknowledge it when you mention it.

The thing is, you didn't mention that bit to the LLM. You mentioned a completely different scenario: basically, two people who happen to be cousins. But you used the same style when presenting it. The issue is not a hidden contradiction or a riddle; the issue is that the LLM completely ignored the logical consequences of the scenario you presented.

It's like asking it for the name of the brave Greek hero in the battle where the famous Trojan Cow was present. If you get back "Achilles", it's obviously wrong: there was never a Trojan Cow to begin with!

judahmeek 18 hours ago | parent | prev [-]

There isn't a contradiction. Making the LLM look for a nonexistent contradiction is the point of this prompt.

allemagne 21 hours ago | parent | prev | next [-]

Yeah, this is the issue with the prompt: it also trips up humans who gloss over "cousin".

I'm assuming that pointing this out leads you, the human, to reread the prompt, go "ah ok", and adjust the way you're thinking about it. ChatGPT (and DeepSeek at least) will usually just double and triple down and repeat "this challenges gender assumptions" over and over.

roughly 20 hours ago | parent | next [-]

This is a good one. I just argued with Gemini 2.5 Pro for a few minutes on this and it absolutely would not yield.

crazygringo 21 hours ago | parent | prev [-]

Oh, I see. I mean yeah, I totally missed that too.

I mean, when we see something we recognize at a glance, we don't usually double-check if it's exactly correct, because we assume the speaker isn't trying to trick us.

I make typos in ChatGPT prompts all the time, and I'm consistently impressed because it answers what I actually meant to ask. So to me, this feels like a feature, not a bug.

I followed up with:

> So the first five words are, "a man and his cousin" -- not his son. Does that change the riddle?

And got back:

> Great observation — yes, that wording does make the riddle more subtle.

> Let’s break it down:

> • “A man and his cousin are in a car crash.”

> • “The man dies, but the cousin is taken to the emergency room.”

> • At the OR, the surgeon says: “I cannot operate on him. He’s my son.”

> So if the cousin is the one taken to the OR, and the surgeon says he’s my son, then the cousin is the surgeon’s son.

Seems fine to me?
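
(For anyone replaying that follow-up outside the web UI, you can carry the first answer forward in the message history. Again a sketch with assumed names, the `openai` client and "gpt-4o", rather than what I actually ran:)

    # Rough multi-turn sketch: ask the riddle, then send the correction as a
    # follow-up with the assistant's first answer kept in the history.
    # (Assumes the `openai` Python client and an example model name.)
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o"  # assumption: any chat model available to you

    history = [{"role": "user", "content": (
        "A man and his cousin are in a car crash. The man dies, but the "
        "cousin is taken to the emergency room. At the OR, the surgeon "
        "says: 'I cannot operate on him. He's my son.' How is this possible?"
    )}]

    first = client.chat.completions.create(model=MODEL, messages=history)
    history.append({"role": "assistant",
                    "content": first.choices[0].message.content})

    # Point out that the prompt says "cousin", not "son", and ask again.
    history.append({"role": "user", "content": (
        'So the first five words are, "a man and his cousin" -- not his son. '
        "Does that change the riddle?"
    )})

    second = client.chat.completions.create(model=MODEL, messages=history)
    print(second.choices[0].message.content)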

allemagne 19 hours ago | parent [-]

I don't know about it being a feature, but humans can definitely make the same mistake, and the follow-up here admittedly doesn't seem too egregious.

To me it's still a little "off" considering that your conversational partner just got called out for missing a detail that should change their entire reading of the riddle, but it's totally fair that you can imagine realistic human emotional states to explain it.

FWIW, when I tried the same thing with GPT-4o, the model went off bizarrely speculating about misdirection around "the man" being the surgeon's son:

>Good catch—and yes, that's exactly where the misdirection comes in.

The riddle says:

    “A man and his cousin are in a car crash. The man dies, but the cousin is taken to the ER...”
Then the surgeon says:

    “I cannot operate on him. He’s my son.”
So here's the trick:

    The man who died is not the surgeon's son.

    The cousin who survived is the surgeon's son.
The confusion comes from people assuming that “the man” who died must be the son. But the riddle never says that. It’s a subtle shift of attention designed to trip you up. Clever, right?

abenga 21 hours ago | parent | prev [-]

That is the exact wrong answer that all models give.

krick 20 hours ago | parent [-]

Technically, it isn't "wrong". It could well be the guy's mother. But I'm nitpicking; it actually is a good example. I tried ChatGPT twice in new chats, with and without "Reason", and both times it gave me nonsensical explanations for "Why mother? Couldn't it be a father?" I was actually kinda surprised, since I expected "reasoning" to fix it, but it actually made things worse.
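
If anyone wants to run the same with-and-without-reasoning comparison through the API rather than the ChatGPT UI, a sketch along these lines should do it. The model names ("gpt-4o" as the non-reasoning baseline, "o1-mini" as the reasoning model) are just examples and assume you have access to them:

    # Sketch: same prompt against a standard chat model and a reasoning model.
    # (Assumes the `openai` Python client; model names are examples only.)
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "A man and his cousin are in a car crash. The man dies, but the "
        "cousin is taken to the emergency room. At the OR, the surgeon "
        "says: 'I cannot operate on him. He's my son.' How is this possible?"
    )

    for model in ("gpt-4o", "o1-mini"):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {model} ---")
        print(resp.choices[0].message.content, "\n")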