andy99 11 hours ago

Classical LLM hallucination happens because AI doesn’t have a world model. It can’t compare what it’s saying to anything.

You’re right that LLMs favor helpfulness, so they may just make things up when they don’t know something, but that alone doesn’t capture the crux of hallucination imo; it’s deeper than mere overconfidence.

OTOH, there was an interesting article recently that I’ll try to find arguing that humans don’t really have a world model either. While I take the point, we can have one when we want to.

Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models

naniwaduni 9 hours ago

You're right, "journalists don't have a world model and can't compare what they're saying to anything" explains a lot.