freehorse | 5 days ago
> What bothers me about the hot takes is the claim that “all models do is hallucinate.” That collapses the distinction entirely.

That is a problem for "Open"AI, because they want to sell their products and because they want to claim that LLMs will scale to superintelligence. It is not a problem for others.

"Bad" hallucinations come in different forms, and what the article describes is only one of them. Not all of them stem from complete uncertainty. There are also cases where the LLM hallucinates functions in a library, or reverses cause and effect when summarising a complex article. Things like this still happen all the time, even with SOTA models. They do not happen because the model is bad at handling uncertainty; they have nothing to do with knowledge uncertainty. Especially the cases where it produces statements that misinterpret causal relationships within a text reveal, imo, exactly the limits of the architectural approach.
p_v_doom | 2 days ago
The problem, IMO, is not so much that all models hallucinate. It's more that our entire reality, especially as expressed through the training data (text), is itself constructed. The text alone makes no difference between the reality of Abraham Lincoln and that of Bilbo Baggins; we often talk about the latter as if he were just as real. Is Jesus real? Is Jesus God? Is it a hallucination to claim the one you don't agree with? We can't even agree amongst ourselves what is real and what is not. What we perceive as "not hallucination" is merely a very broad consensus supported by education, culture, and personal beliefs, and it varies quite a bit. And little in the existence of the model gives it the tools to make those distinctions. Quite the opposite.