mcphage 5 days ago
LLMs don’t know the difference between true and false, or that there even is a difference between true and false, so I think it’s OpenAI whose definition is not useful. As for "widely agreed upon", well, I’m assuming the purpose of this post is to try and reframe the discussion.
hodgehog11 5 days ago | parent
If an LLM outputs a statement that is, by definition, either true or false, then we can determine whether it is true or false. Whether the LLM "knows" is irrelevant. The OpenAI definition is useful because it implies hallucination is something that can be logically avoided.

> I’m assuming the purpose of this post is to try and reframe the discussion

It's to establish a meaningful and practical definition of "hallucinate" so we can actually make some progress. If everything is a hallucination, as the other comments seem to suggest, then the term is a tautology and of no use to us.