gobdovan | 3 days ago
OpenAI recently took a systematic look at why models hallucinate [0][1]. The article you shared raises an interesting point by comparing human memory with LLMs, but I think the analogy only goes so far. The two are too distinct to explain hallucinations simply as a lack of meta-cognition or meta-memory. These systems are more like alien minds, and allegories risk introducing imprecision when we're trying to debug and understand their behavior. OpenAI's paper instead traces hallucinations to the training objectives and benchmarks themselves, grounding the explanation in first principles and the mechanics of ML. Metaphors are useful for creativity, but less so for debugging and understanding, especially now that the systematic view is this advanced. A quick sketch of the paper's incentive argument follows the links.

[0] https://openai.com/index/why-language-models-hallucinate/?ut...

[1] https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4a...
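If I've understood the paper's framing correctly, the core incentive argument reduces to a one-line expected-value comparison: under accuracy-only (0/1) grading, guessing weakly dominates abstaining. A toy sketch of that comparison (my own illustration, not the paper's code; p_correct is a made-up probability that the model's best guess is right):

    # Expected benchmark score for one question under binary 0/1 grading,
    # comparing "always guess" with "abstain when unsure".
    def expected_score(p_correct: float, abstain: bool) -> float:
        if abstain:
            return 0.0       # "I don't know" earns nothing under accuracy-only grading
        return p_correct     # a guess earns 1 with probability p_correct

    for p in (0.05, 0.25, 0.50):
        print(f"p_correct={p:.2f}  guess={expected_score(p, False):.2f}  "
              f"abstain={expected_score(p, True):.2f}")

For any p_correct > 0 the guess scores at least as well, so leaderboards that only count accuracy reward confident guessing; as I read it, scoring that penalizes confident errors or credits abstention is roughly the fix the paper argues for.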
K0balt | 2 days ago
Hallucinations are not a malfunction or any other process outside the normal functioning of the model. They are merely an output we find unhelpful, but one that is in every other way optimal given the training data, context, and the model precision and parameters being used. I honestly have no idea why OAI felt they needed to publish a “paper” about this, since it is blazingly obvious to anyone who understands the fundamentals of transformer inference, but here we are.

The confusion on this topic comes from calling these suboptimal outputs “hallucinations”, which drags anthropomorphic fallacies into the room by the neck even though they were peacefully minding their own business down the corridor on the left. “Hallucination” implies a fundamentally fixable error in inference, a malfunction of thought caused by a pathology or a broken algorithm. An LLM that is “hallucinating” is working precisely as implemented; we just don't feel that the output usefully matches the parameters, from a human perspective. It's an unhelpful result from the algorithm, like any other failure of training, compression, alignment, or optimisation.
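To make that concrete, here is a toy sampler with made-up logits (an illustration, not any real model's decoding code). There is no branch that ever emits “I don't know”; the mechanism is identical whether the next-token distribution is sharply peaked or nearly flat:

    # Toy sketch: softmax sampling always returns a token, confident or not.
    # The logits below are invented for illustration -- the point is that
    # "hallucinating" and "knowing" go through the exact same code path.
    import math, random

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def sample(probs):
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return len(probs) - 1

    confident = softmax([8.0, 0.1, 0.2, 0.1])   # one clear winner
    uncertain = softmax([1.0, 1.1, 0.9, 1.0])   # nearly uniform: a coin flip

    print("confident pick:", sample(confident), [round(p, 3) for p in confident])
    print("uncertain pick:", sample(uncertain), [round(p, 3) for p in uncertain])

Either way a token comes out; calling the second case a “hallucination” describes our disappointment, not a fault in the mechanism.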
| ||||||||||||||||||||
aszen | 3 days ago
I haven't read the full paper yet, but my intuition is that hallucinations are a byproduct of models having too much information that has to be compressed in order to generalize. We already know that larger models hallucinate less, since they can store more information. Are there any smaller models that hallucinate less?