MattPalmer1086 | 4 days ago
I agree that saying everything is a hallucination doesn't help narrow down possible solutions. It does, however, make the point that hallucinations are not some special glitch distinct from the normal operation of the model. It's just outputting plausible text, which is right often enough to be useful. Adding some extra sauce to help the model evaluate the correctness of its answers, or to recognise when it doesn't know enough to give a good answer, is obviously one way to mitigate this otherwise innate behaviour.
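
To illustrate what that "extra sauce" might look like, here is a minimal sketch of a second-pass self-check: the model grades its own draft answer and abstains when confidence is low. This is only one possible mitigation, not a claim about how any particular system does it; `generate()` is a hypothetical stand-in for whatever text-generation call you actually use, and the 0.7 threshold is an arbitrary illustrative value.

    # Sketch of a self-check pass: draft an answer, then ask the model to rate
    # its own confidence, and abstain below a threshold. `generate()` is a
    # hypothetical placeholder, not a real library API.

    def generate(prompt: str) -> str:
        """Placeholder for a call to an LLM; returns the model's text output."""
        raise NotImplementedError

    def answer_with_self_check(question: str, threshold: float = 0.7) -> str:
        draft = generate(f"Answer concisely:\n{question}")

        # Second pass: ask the model how well-supported its own answer is.
        critique = generate(
            "On a scale from 0 to 1, how confident are you that the following "
            "answer is factually correct?\n"
            f"Question: {question}\nAnswer: {draft}\n"
            "Reply with only a number."
        )
        try:
            confidence = float(critique.strip())
        except ValueError:
            confidence = 0.0  # treat an unparseable rating as low confidence

        # Abstain rather than return a plausible-but-unverified answer.
        if confidence < threshold:
            return "I don't know enough to answer that reliably."
        return draft

Of course, the self-check is generated by the same plausible-text machinery, so it reduces rather than eliminates the problem.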