saghm 5 days ago
> so if you ask, "what is the capital of colorado" and it answers "denver" calling it a Hallucination is nihilistic nonsense that paves over actually stopping to try and understand important dynamics happening in the llm matrices

On the other hand, calling it anything other than a hallucination misrepresents truth as something these models can actually distinguish, as if they could sort their outputs by whether they accurately reflect reality, and conflates a fundamentally unsolved problem with an engineering tradeoff.
ComplexSystems 5 days ago
It isn't a hallucination, because that isn't how the term is defined. "Hallucination" refers very specifically to "plausible but false statements generated by language models," so a correct answer like "Denver" is excluded by definition. At the end of the day, the goal is to train models that can differentiate between true and false statements, at least to a much better degree than they can now, and the linked article seems to have some very interesting suggestions about how to get them to do that.