m13rar | 6 days ago
Sucking up does appear to be a personality trait. Hallucinations are not completely understood yet. We are past the stage where models produce random strings of output. Frontier models can perform an imitation of reasoning, but the hallucination problem seems to stem from an inability to learn past their training data or to properly update their learned weights when new evidence is presented. Hallucinations are beginning to look like a cognitive bias or a cognitive deficiency in the model's intelligence, which makes this more of an architectural problem than a statistical one.
petesergeant | 6 days ago | parent
> Hallucinations are not completely understood yet.

Is that true? Is it anything more complicated than LLMs producing text optimized for plausibility rather than for any sort of ground truth?
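
The "plausibility, not truth" framing is easy to make concrete. Here is a minimal, purely illustrative Python sketch (made-up logits for an imagined prompt): the sampling step only ever consults how likely each token looks, so a confidently wrong continuation is, mechanically, indistinguishable from a correct one.

    import math
    import random

    # Hypothetical next-token scores (logits) after a prompt like
    # "The capital of Australia is". The scores reflect how plausible
    # each continuation looked in training data; nothing in this
    # pipeline encodes whether a token is actually true.
    logits = {"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.7}

    def softmax(scores):
        # Convert raw plausibility scores into a probability distribution.
        z = sum(math.exp(s) for s in scores.values())
        return {tok: math.exp(s) / z for tok, s in scores.items()}

    probs = softmax(logits)

    # Sampling picks tokens in proportion to plausibility alone.
    # If the wrong answer happens to score highest, the model
    # confidently emits it: no lookup, no verification step.
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)   # approx {'Sydney': 0.48, 'Canberra': 0.40, 'Melbourne': 0.12}
    print(token)

On this view a "hallucination" is just a sample where the highest-plausibility token and the factually correct token diverge, which is a statistics story, not an architecture story.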