fumeux_fume 5 days ago
In the article, OpenAI defines hallucinations as "plausible but false statements generated by language models." So clearly it's not all that LLMs know how to do. I don't think Parsons is working from a useful or widely agreed-upon definition of what a hallucination is, which leads to these "hot takes" that just clutter and muddy up the conversation around how to reduce hallucinations and produce more useful models.
mpweiher 5 days ago | parent
They just redefined the term so that hallucinations that are useful are no longer called hallucinations. But the people who say everything LLMs do is hallucinate clearly also make that distinction; they just refuse to rename the useful hallucinations.

"How many legs does a dog have if you call his tail a leg? Four. Saying that a tail is a leg doesn't make it a leg." -- Abraham Lincoln
mcphage 5 days ago | parent
LLMs don’t know the difference between true and false, or that there even is a difference between true and false, so I think it’s OpenAI whose definition is not useful. As for "widely agreed upon," well, I’m assuming the purpose of this post is to try to reframe the discussion.