dudeinhawaii 11 hours ago

That's a fair point. What I find interesting (and perhaps didn't articulate properly) isn't confusion about the LLM's behavior, but the question of whether human cognition might operate on similar principles at a fundamental level, just via different mechanisms and with better calibration (similar algorithm, different substrate). That's why I used human examples at the start.

When I see an LLM confidently generate an answer about a non-existent thing by associating related concepts, I wonder: how different is this from humans confidently filling knowledge gaps with our own probability-based assumptions? We do this constantly: connecting dots based on pattern recognition and making statistical leaps between concepts.
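To make that concrete, here's a minimal sketch in Python, using made-up logits and a fictional prompt, of why a plain next-token sampler produces a confident-looking answer even when nothing grounds it: softmax always hands back some ranking, and greedy decoding just picks the top entry.

    import math

    # Toy next-token logits for the prompt "The capital of Atlantis is ___".
    # The numbers are invented for illustration; the point is that softmax
    # always yields *some* distribution, so the most likely token reads like
    # a confident answer even when nothing in training grounds the question.
    logits = {"Poseidonia": 2.1, "Atlantis City": 1.7, "unknown": 0.3, "I don't know": -0.5}

    def softmax(scores):
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)
    best = max(probs, key=probs.get)
    print(best, round(probs[best], 2))  # Poseidonia 0.52 -- fluent, but ungrounded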

If we understood how human minds worked in their entirety, then I'd be more likely to say "ha, stupid LLM, it hallucinates instead of saying I don't know". But we don't, and I see a strong similarity to many humans. What are weights and biases but our own heavily weighted neural "nodes", built up over a lifetime, saying "this is likely to be true because of past experiences"? I say this with only a hobbyist's understanding of neuroscience, mind you.