LLM Hallucinations in the Wild(arxiv.org)
3 points by anygivnthursday 7 hours ago | 1 comment
anygivnthursday 7 hours ago

> Large language models (LLMs) are known to generate plausible but false information across a wide range of contexts, yet the real-world magnitude and consequences of this hallucination problem remain poorly understood. Here we leverage a uniquely verifiable object - scientific citations - to audit 111 million references across 2.5 million papers in arXiv, bioRxiv, SSRN, and PubMed Central.