raincole 4 days ago

Yeah. Has it even been shown that LLMs don't hallucinate on smaller tasks? The author seems to imply that. I fail to see how it could be true.

adastra22 4 days ago | parent

No? That is trivially not the case. Ask an LLM something outside its training data and it will hallucinate the answer. How can it do anything else? Maybe its hallucination ends up being correct, but not all of the time.