ClaraForm | 3 days ago
Hey, I know what the article wanted to say; see the last two-ish sentences of my previous response. My point is that the article might be misinterpreting the causes of, and solutions for, the problems it sees. Relying on the brain as an example of how to improve might be a mistaken premise, because maybe the brain isn't doing what the article thinks it's doing. So we're in agreement there, that the brain and LLMs are incomparable, but maybe the parts where they're comparable are more informative about the nature of hallucinations than the author thinks.
n4r9 | 3 days ago
I think you can confidently say that brains do the following and LLMs don't:

* Continuously update their state based on sensory data
* Retrieve/gather information that correlates strongly with historic sensory input
* Are able to associate propositions with specific instances of historic sensory input
* Use the above two points to verify/validate their belief in said propositions

Describing how memories "feel" may confuse the matter, I agree. But I don't think we should be quick to dismiss the argument.
HarHarVeryFunny | 3 days ago
But the thing is that humans don't hallucinate as much as LLMs do, so it's the differences, not the similarities (such as they are), that matter for understanding why. It's pretty obvious that an LLM's not knowing what it does or does not know is a major part of why it hallucinates, while humans generally do know the limits of their own knowledge.