juancroldan 3 days ago

The fundamental limitation of LLMs is that they represent knowledge as parametric curves, and their generalization is merely interpolation along those curves. That can only ever produce outputs that correlate with the facts in the training data, not outputs that are causally derived from them, which makes hallucinations inevitable. Same as with human memory.
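A toy way to see the point, not a claim about any particular LLM: fit a small parametric model (here a polynomial, standing in for "parametric curves") to noisy samples of a known law, then query it off the training distribution. The setup and numbers below are entirely hypothetical, just an analogy for interpolation vs. causal derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": distance fallen under gravity, d = 0.5 * g * t^2,
# sampled only for t in [0, 2] seconds, with a little measurement noise.
t_train = rng.uniform(0.0, 2.0, size=200)
d_train = 0.5 * 9.81 * t_train**2 + rng.normal(0.0, 0.5, size=200)

# Parametric model: a degree-7 polynomial fit to the samples.
model = np.poly1d(np.polyfit(t_train, d_train, deg=7))

# Inside the training range, interpolation looks like it "knows physics".
print(model(1.5), 0.5 * 9.81 * 1.5**2)   # close agreement

# Outside it, the model still returns a confident number, but that number is
# just the fitted curve's extrapolation: correlated with the data it saw,
# not derived from the causal law. It will typically be far off the truth,
# the analogue of a fluent hallucination.
print(model(10.0), 0.5 * 9.81 * 10.0**2)
```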