bitwize 16 hours ago

When improper use of a statistical model generates bogus inferences in generative AI, we call the result a "hallucination"...

baq 13 hours ago

It should have been called confabulation; hallucination is not the correct analog. Tech bros simply used the first word they thought of, and it unfortunately stuck.

K0balt 11 hours ago

"Undesirable output" might be more accurate, since there is absolutely no difference in the process that creates a useful output vs. a “hallucination” other than the utility of the resulting data.
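
Concretely, here's a toy sketch of a single decoding step (made-up vocabulary and logits, not any real model's code): there is no truth check anywhere in the loop, so a correct completion and a “hallucinated” one come out of the exact same code path.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy next-token distribution for "The capital of France is ..."
    # Vocabulary and logits are invented for illustration.
    vocab = ["Paris", "Lyon", "Rome", "Atlantis"]
    logits = np.array([2.0, 0.5, 0.3, 0.1])

    def sample_next_token(logits, temperature=1.0):
        # One decoding step: softmax over the logits, then sample.
        # Nothing here inspects whether the chosen token is true;
        # the model only scores plausibility.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    for _ in range(5):
        print(vocab[sample_next_token(logits)])

Most draws land on "Paris", but when the sampler lands on "Atlantis" nothing different happened mechanically; we only label that draw a hallucination after the fact.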

I had a partially formed insight along these lines: LLMs exist in a latent space of information that has very little external grounding. A sort of dreamspace. I wonder if embodying them in robots will anchor them to some kind of ground-truth source?