CuriouslyC 6 days ago

This article is accurate. That's why I'm investigating a Bayesian symbolic Lisp reasoner. It's incapable of hallucinating, it produces auditable traces that are actual programs, and it kicks the crap out of LLMs at stuff like ARC-AGI, symbolic reasoning, logic programs, game playing, etc. I'm working on a paper where I show that the same model can break 80 on ARC-AGI, beat the house by counting cards at blackjack, and solve complex mathematical word problems.
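For readers wondering what "auditable traces which are actual programs" could mean in practice: here is a minimal, hypothetical sketch (not the commenter's system) of Bayesian program induction over a tiny Lisp-like DSL. The prior favors short programs, the likelihood is exact fit on the examples, and the returned artifact is the program itself, so every answer can be audited by re-running it. All names here (`induce`, `evaluate`, the DSL) are illustrative assumptions.

```python
# Hypothetical sketch of Bayesian program induction over a tiny
# Lisp-like DSL. The "trace" is the winning program itself, so any
# answer is auditable by re-evaluating the expression.
import itertools

# DSL: expressions are nested tuples like ('add', 'x', 1)
OPS = {
    'add': lambda a, b: a + b,
    'mul': lambda a, b: a * b,
}

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    return OPS[op](evaluate(a, x), evaluate(b, x))

def size(expr):
    # Description length: an Occam-style prior favors short programs.
    if expr == 'x' or isinstance(expr, int):
        return 1
    return 1 + size(expr[1]) + size(expr[2])

def enumerate_programs(max_size):
    atoms = ['x', 0, 1, 2, 3]
    frontier = list(atoms)
    for a, b in itertools.product(atoms, repeat=2):
        for op in OPS:
            frontier.append((op, a, b))
    return [p for p in frontier if size(p) <= max_size]

def induce(examples, max_size=3):
    # Posterior ∝ prior × likelihood; with a 0/1 exact-fit likelihood
    # this reduces to the shortest program consistent with all examples.
    candidates = [p for p in enumerate_programs(max_size)
                  if all(evaluate(p, x) == y for x, y in examples)]
    return min(candidates, key=size) if candidates else None

# Induce y = x + 2 from three input/output pairs.
prog = induce([(1, 3), (2, 4), (5, 7)])
```

A system like the one described would of course need a far richer DSL and a real posterior over programs, but the enumerate-score-return-program loop is the basic shape, and it is structurally unable to assert an answer it cannot re-derive.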
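The blackjack claim is less exotic than it sounds: card counting is a simple deterministic tally. As a concrete reference point (the standard textbook Hi-Lo scheme, not a claim about the commenter's model), low cards 2-6 count +1, 7-9 count 0, and tens/faces/aces count -1; a positive count means the remaining shoe is rich in high cards, which favors the player:

```python
# Hi-Lo card counting: a deterministic running tally over cards seen.
HI_LO = {r: +1 for r in ['2', '3', '4', '5', '6']}
HI_LO.update({r: 0 for r in ['7', '8', '9']})
HI_LO.update({r: -1 for r in ['10', 'J', 'Q', 'K', 'A']})

def running_count(cards_seen):
    return sum(HI_LO[c] for c in cards_seen)

def true_count(cards_seen, decks_remaining):
    # Bet sizing is conventionally keyed to the true count: the running
    # count normalized by the number of decks left in the shoe.
    return running_count(cards_seen) / decks_remaining
```

This kind of exact bookkeeping is precisely where a symbolic system has an edge over a model that has to approximate arithmetic in its weights.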

leptons 6 days ago | parent [-]

LLMs are also incapable of "hallucinating", so maybe that isn't the buzzword you should be using.