sethhochberg 12 hours ago
I know this is written to be tongue-in-cheek, but it's really almost the exact same problem playing out on both sides.

LLMs hallucinate because training on source material is a lossy process, and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive, so few people use those techniques by default. Lowest time to a good-enough response is the primary metric.

Journalists oversimplify and fail to ask followup questions because, while they can research and cite primary sources, it's slow and expensive in an infinitesimally short news cycle, so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions, so that's the primary metric.

In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time, don't value them.
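For concreteness, a toy sketch of the "research and cite primary sources" pattern the comment above describes: retrieve first, then answer only from what was retrieved, with a citation attached. The two-document corpus and keyword-overlap scorer here are invented stand-ins for a real search index and an LLM doing the synthesis:

    # Toy "retrieve, then answer with citations" pipeline. The corpus and
    # scoring are stand-ins; a real system would use a search index and
    # an LLM to synthesize the final answer from the retrieved sources.

    CORPUS = {
        "doc1": "The city council voted 7-2 to approve the budget on Tuesday.",
        "doc2": "The mayor proposed the budget in January.",
    }

    def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
        # Rank documents by naive keyword overlap with the query.
        q = set(query.lower().split())
        scored = sorted(
            CORPUS.items(),
            key=lambda kv: len(q & set(kv[1].lower().split())),
            reverse=True,
        )
        return scored[:k]

    def answer_with_citations(query: str) -> str:
        sources = retrieve(query)
        # An LLM would synthesize here; we just quote the top source
        # verbatim. The point stands either way: every claim traces
        # back to a primary document instead of model memory.
        doc_id, text = sources[0]
        return f"{text} [source: {doc_id}]"

    print(answer_with_citations("How did the council vote on the budget?"))

This is exactly the "slow and expensive" path: an extra retrieval round-trip per answer, which is why it loses to lowest-time-to-good-enough by default.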
mbesto 8 hours ago
> LLMs hallucinate because training on source material is a lossy process and bigger,

LLMs hallucinate because they are probabilistic by nature, not because the source material is lossy or too big. They are literally designed to create some level of "randomness": https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
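To make "designed to create some level of randomness" concrete, here's a toy temperature-sampling sketch. The three-token vocabulary and scores are invented, and real models sample over tens of thousands of tokens, but the mechanism is the same: the next token is drawn from a probability distribution, so repeated runs can diverge.

    # Toy illustration of why sampled LLM output varies run to run:
    # tokens are drawn from a distribution, not chosen deterministically.
    # The vocabulary and logits are made up for illustration.
    import math
    import random

    logits = {"Paris": 4.0, "Lyon": 2.0, "Berlin": 1.0}  # model scores for next token

    def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
        # Softmax with temperature: higher T flattens the distribution,
        # making low-probability tokens more likely to be picked.
        scaled = {t: math.exp(s / temperature) for t, s in logits.items()}
        total = sum(scaled.values())
        r = random.uniform(0, total)
        for token, weight in scaled.items():
            r -= weight
            if r <= 0:
                return token
        return token  # float-rounding fallback: return the last token

    print([sample(logits, temperature=1.0) for _ in range(5)])  # varies per run

At temperature 0 you'd always take the argmax ("Paris"), but production systems rarely run that way, and even then the linked article argues inference isn't fully deterministic.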
andy99 11 hours ago
Classical LLM hallucination happens because AI doesn't have a world model. It can't compare what it's saying to anything. You're right that LLMs favor helpfulness, so they may just make things up when they don't know them, but this alone doesn't capture the crux of hallucination imo; it's deeper than just being overconfident.

OTOH, there was an interesting article recently, which I'll try to find, saying humans don't really have a world model either. While I take the point, we can have one when we want to.

Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models