K0balt a day ago
We can call it whatever we like, and yes, the answer is training, just like everything regarding the quality of LLM output per parameter count. The problem is that many people understand “hallucination” as a gross malfunction of an otherwise correctly functioning system, i.e. a defect that can and must be categorically “fixed”, not understanding that it is merely a function of trained weights, inference parameters, and prompt context that they can:

A: probably work around by prompting and properly structuring tasks

B: never completely rule out

C: not avoid at all in certain classes of data transformations, where it will creep in in subtle ways and corrupt the data

D: not intrinsically detect, since the model lacks the human characteristic of “woah, this is trippy, I feel like maybe I’m hallucinating”

These misconceptions stem from the fact that in LLM parlance, “hallucination” is often conflated with the same-named, relatable human condition, which is largely considered completely discrete from normal conscious thought and workflows.

Words and their meanings matter, and the failure to label things properly is often at the root of significant wastes of time and effort. Semantics are the point of language.