johnisgood 19 hours ago
Hallucinations are not novel ideas. They are novel combinations of tokens constrained by learned probability distributions. I have mentioned Hume before, and will do so again: you can combine "golden" and "mountain" without ever seeing a golden mountain, but you cannot conjure "golden" without having encountered something that gave you the concept.

LLMs may generate strings they have never seen, but those strings are still composed entirely from training-derived representations. The model can output "quantum telepathic blockchain", but each token's semantic content comes from training data. It is recombination, not creation. The model has not built representations of concepts it never encountered in training; it is just sampling poorly constrained combinations. Can you distinguish between a false hallucination and a genuinely novel conceptual representation?
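To make the recombination point concrete, here is a deliberately toy sketch (a unigram sampler, nothing like a real LLM): the corpus, function names, and seed are all made up for illustration. It can emit a pair like "golden mountain" that never appeared together in its training text, but it can never emit a token that was absent from that text.

    # Toy unigram "model" over a tiny corpus (illustrative only, not a real LLM).
    import random
    from collections import Counter

    corpus = "the golden crown sat on the tall mountain near the golden river".split()

    counts = Counter(corpus)          # training-derived distribution
    vocab = list(counts)
    weights = [counts[w] for w in vocab]

    def sample_phrase(k=2, seed=0):
        rng = random.Random(seed)
        # Every emitted token is drawn from the learned distribution;
        # the *combination* may be novel, the *tokens* never are.
        return " ".join(rng.choices(vocab, weights=weights, k=k))

    print(sample_phrase(k=2, seed=3))  # may produce "golden mountain", unseen as a pair
    print("unicorn" in vocab)          # False: no training exposure, no token

The sampled phrase is "novel" only in the sense that the pair was never observed; every piece of it traces back to the corpus, which is the distinction the Hume example is pointing at.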