justcallmejm 2 days ago

This is why a neurosymbolic system is necessary, as Aloe (https://aloe.inc) recently demonstrated by exceeding the performance of frontier models with a model-agnostic approach.

HarHarVeryFunny 2 days ago

No, humans are the counterexample.

If you want a model that doesn't hallucinate, then train it to predict the truth and give it a way to test its predictions. For humans and animals, the truth is the real world.

An LLM is trained to predict individual training-sample continuations (a billion conflicting mini-truths, not a single grounded one), whether those are excerpts from Wikipedia or bathroom-stall musings recalled on 4chan. From all of this the LLM builds a predictive model, which it is then not allowed to test at runtime.
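
To make the contrast concrete, here is a minimal sketch (Python/PyTorch style, purely illustrative): the first function is the standard next-token objective described above, where the target is whatever the training sample happened to say next; the second is the kind of grounded objective argued for here, where the prediction is checked against an environment. The `model` and `environment.step` names are hypothetical placeholders, not any real training recipe or API.

    import torch
    import torch.nn.functional as F

    def next_token_loss(model, tokens):
        # Standard LLM objective: predict each training-sample continuation.
        # The "truth" is whatever token that particular sample contains next.
        logits = model(tokens[:, :-1])              # (batch, seq-1, vocab)
        targets = tokens[:, 1:]                     # shift targets by one position
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               targets.reshape(-1))

    def grounded_loss(model, observation, environment):
        # Grounded alternative: predict an outcome, then test that prediction
        # against the world rather than against a text continuation.
        prediction = model(observation)             # model's claim about the world
        outcome = environment.step(observation)     # what actually happened
        return F.mse_loss(prediction, outcome)      # penalize being wrong about reality

The only point of the contrast is where the error signal comes from: in the second case it comes from the world, not from a corpus.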

So, yeah, we should stop doing that.