yreg 4 days ago

Maybe it goes against the definition, but when explaining LLMs I like saying that _all_ output is a hallucination.

It just happens that a lot of that output is useful and corresponds with the real world.

kelnos 4 days ago | parent

Well yes, it goes against the accepted definition. And if all output is hallucination, then it's not really a useful way to describe anything, so why bother?

MattPalmer1086 4 days ago | parent

I agree that saying everything is a hallucination doesn't help narrow down possible solutions.

It does, however, make the point that hallucinations are not some special glitch distinct from the normal operation of the model. The model is just outputting plausible text, which is right often enough to be useful.

Adding some extra sauce to help the model evaluate the correctness of its answers, or recognise when it doesn't know enough to give a good one, is obviously one way to mitigate this otherwise innate behaviour.
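
Something like this rough Python sketch of the idea (not anyone's actual implementation; `complete` is a hypothetical stand-in for whatever LLM call you use, and the 1-10 self-rating threshold is an arbitrary choice):

    def complete(prompt: str) -> str:
        """Placeholder for a real LLM call (wire this up to your provider)."""
        raise NotImplementedError

    def answer_with_self_check(question: str, threshold: int = 7) -> str:
        # First pass: get a draft answer.
        draft = complete(f"Answer concisely:\n{question}")

        # Second pass: have the model grade its own confidence in the draft.
        rating = complete(
            "On a scale of 1-10, how confident are you that the following "
            "answer to the question is factually correct? Reply with a number only.\n"
            f"Question: {question}\nAnswer: {draft}"
        )
        try:
            score = int(rating.strip().split()[0])
        except (ValueError, IndexError):
            score = 0  # unparseable rating -> treat as low confidence

        # Abstain rather than return a low-confidence answer.
        if score >= threshold:
            return draft
        return "I'm not confident enough to answer that."

It doesn't stop the model from producing plausible-but-wrong text; it just adds a layer that sometimes catches it before it reaches the user.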

drekipus 4 days ago | parent

But it's the perfect definition, because it shows what it is: the output is a hallucination of what it thinks you want, which you can use to form better prompts or the like.

To say "it only hallucinates sometimes" is burying the lede and is confusing for people who are trying to use it.

Q: How do I stop hallucinations? A: It's a useless question, because you can't. Hallucination is the mechanism that gives you what you want.

yreg 3 days ago | parent

I find it useful to underline the intrinsic properties of LLMs. When an LLM makes up something untrue, it's not a 'bug'.

I think that treating all LLM output as 'hallucinations', while making use of the fact that these hallucinations often happen to be true of the real world, is a good mindset, especially for nontechnical people, who might otherwise not realise that correctness isn't guaranteed.