HarHarVeryFunny 2 days ago

No - humans are the counterexample.

If you want a model that doesn't hallucinate, then train it to predict the truth, and give it a way to test its predictions. For humans/animals the truth is the real world.

An LLM is trained to predict individual training-sample continuations (a billion conflicting mini-truths, not a single grounded one), whether those are excerpts from Wikipedia or bathroom-stall musings recalled on 4chan. From all of this the LLM builds a predictive model, which it is then not allowed to test at runtime.
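To make that concrete, here's a minimal sketch of the next-token-prediction objective being described, written in PyTorch with toy dimensions and a stand-in model of my own choosing (a real LLM is a transformer, but the loss is the same): the only notion of "correct" is agreement with whatever continuation the training text happens to contain.

    import torch
    import torch.nn.functional as F

    # Toy sizes, purely illustrative.
    vocab_size, d_model, seq_len, batch = 1000, 64, 32, 8

    # Stand-in "model": embedding + linear head instead of a transformer;
    # the training signal it receives is the same next-token loss.
    embed = torch.nn.Embedding(vocab_size, d_model)
    head = torch.nn.Linear(d_model, vocab_size)

    # One training sample per row, here just random token ids.
    tokens = torch.randint(0, vocab_size, (batch, seq_len))

    logits = head(embed(tokens[:, :-1]))   # predict token t+1 from tokens up to t
    targets = tokens[:, 1:]                # "truth" = whatever came next in the text

    # Cross-entropy against the sampled continuation: the model is rewarded for
    # matching the training text itself, not for matching the real world.
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()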

So, yeah, we should stop doing that.