p_v_doom 2 days ago |
The problem, IMO, is not so much that all models hallucinate. It's more that our entire reality, especially as expressed through the training data - text - is entirely constructed. As far as the text is concerned, there is no difference between the reality of Abraham Lincoln and that of Bilbo Baggins. We often talk about the latter as if he is just as real. Is Jesus real? Is Jesus god? Is it hallucination to claim the one you don't agree with? We can't even agree amongst ourselves what is real and what is not. What we perceive as "not hallucination" is merely a very big consensus supported by education, culture, and personal beliefs, and it varies quite a bit. And little in the existence of the model gives it the tools to make those distinctions. Quite the opposite.
pegasus 7 hours ago | parent |
What you describe is called the grounding problem. But it's only a problem for those who vainly hope that these models will somehow miraculously evolve into autonomous, sentient beings. That's not the only way this technology can be incredibly useful to humanity - or detrimental, for that matter. It has the potential to amplify our intelligence to a degree that is likely to radically transform our world.