mbesto 8 hours ago
> LLMs hallucinate because training on source material is a lossy process and bigger,

LLMs hallucinate because they are probabilistic by nature, not because the source material is lossy or too big. They are literally designed to introduce some level of "randomness": https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
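Loosely, that baked-in randomness lives at the sampling step. A minimal sketch in Python (the logit values here are made up purely for illustration): temperature scales the logits before a softmax, and the next token is drawn from the resulting distribution rather than always taking the argmax.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Pick a next-token id from raw logits.

    temperature > 0: scale logits, softmax, and sample -- stochastic.
    temperature == 0: fall back to argmax -- deterministic ("greedy").
    """
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    scaled -= scaled.max()                              # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

# Toy logits over a 5-token vocabulary (illustrative values only).
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
print(sample_next_token(logits, temperature=1.0))  # may vary run to run
print(sample_next_token(logits, temperature=0.0))  # always token 0
```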
ChadNauseam 6 hours ago | parent
So if you set temperature=0 and run the LLM serially (making it deterministic), would it stop hallucinating? I don't think so. I would guess that the nondeterminism issues mentioned in the article are not at all a primary cause of hallucinations.
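To make the distinction concrete, here is a minimal sketch of fully greedy decoding, assuming the Hugging Face transformers API and the small public "gpt2" checkpoint (both my assumptions, not from the thread): with do_sample=False the output is identical on every run, yet the model still just emits its most probable continuation, which can be wrong.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint for illustration; any causal LM would do.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of Australia is", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=False,      # greedy: always take the argmax token
    max_new_tokens=10,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Deterministic across runs, but determinism alone says nothing about
# whether the continuation is factually correct.
```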