aszen | 3 days ago
I haven't read the full paper yet, but my intuition is that hallucinations are a byproduct of models having too much information that needs to be compressed for generalization. We already know that larger models hallucinate less since they can store more information. Are there any smaller models which hallucinate less?
gobdovan | 2 days ago
I'd recommend checking out the full conclusions section. What I can tell you is that with LLMs, it's never a linear correlation. There's always some balance you have to strike, as they really do operate on a changing-anything-changes-everything basis.

Excerpt:

Claim: Avoiding hallucinations requires a degree of intelligence which is exclusively achievable with larger models.

Finding: It can be easier for a small model to know its limits. For example, when asked to answer a Māori question, a small model which knows no Māori can simply say "I don't know", whereas a model that knows some Māori has to determine its confidence. As discussed in the paper, being "calibrated" requires much less computation than being accurate.
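To make the "calibration is cheaper than accuracy" point concrete, here's a rough sketch of confidence-threshold scoring. The t/(1-t) penalty and the function names are my own illustrative assumptions, not a quote from the paper; the point is only that deciding whether to answer needs a calibrated confidence estimate, not the correct answer itself.

    # Minimal sketch (illustrative penalty scheme, not the paper's exact formulation):
    # once confident errors are penalized, abstaining beats guessing below a threshold.

    def expected_score(p_correct: float, threshold: float) -> float:
        """Expected score for answering: wrong answers cost t/(1-t) points,
        correct answers earn 1 point, and abstaining scores 0."""
        penalty = threshold / (1.0 - threshold)
        return p_correct * 1.0 - (1.0 - p_correct) * penalty

    def should_answer(p_correct: float, threshold: float) -> bool:
        """Answer only when the expected score of answering beats abstaining (0)."""
        return expected_score(p_correct, threshold) > 0.0

    # A small model that knows no Māori can report p_correct near 0 and abstain;
    # a model that knows a little must estimate p_correct and compare it to t.
    for p in (0.0, 0.3, 0.6, 0.9):
        print(p, should_answer(p, threshold=0.5))

Estimating p_correct well (calibration) is a much weaker requirement than actually producing the right answer (accuracy), which is the asymmetry the excerpt is pointing at.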
K0balt | 2 days ago
Hallucinations are not a malfunction or any process outside the normal functioning of the model. They are merely outputs that we find unuseful, but that are in all other ways optimal given the training data, the context, and the model precision and parameters being used.
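In mechanical terms, here's a sketch with a made-up next-token distribution (toy numbers, not from any real model) showing that greedy decoding just returns the most probable continuation, whether or not that continuation is true:

    # Toy sketch: decoding picks the most probable continuation under the model;
    # nothing "malfunctions" when that continuation happens to be false.

    # Hypothetical distribution for the prompt "The capital of Atlantis is"
    toy_distribution = {
        "Poseidonia": 0.42,   # plausible-sounding, confidently wrong
        "unknown":    0.31,
        "Atlantis":   0.18,
        "Paris":      0.09,
    }

    # Greedy decoding: take the argmax, exactly as the sampler is meant to.
    answer = max(toy_distribution, key=toy_distribution.get)
    print(answer)  # "Poseidonia": optimal under the model, unuseful to us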
euroderf | 2 days ago
One robot's "hallucination" is another robot's "connecting the dots" or "closing the circle".