ggm 17 hours ago

https://www.scientificamerican.com/blog/scicurious-brain/ign...

lgas 17 hours ago | parent [-]

I'm not 100% sure I'd call that a hallucination, but it's close enough and interesting enough that I'm happy to stand corrected.

bitwize 16 hours ago | parent [-]

When improper use of a statistical model generates bogus inferences in generative AI, we call the result a "hallucination"...

baq 13 hours ago | parent [-]

It should have been called confabulation; hallucination is not the correct analog. Tech bros simply used the first word they thought of, and it unfortunately stuck.

K0balt 11 hours ago | parent [-]

"Undesirable output" might be more accurate, since there is absolutely no difference in the process that creates a useful output versus a "hallucination" other than the utility of the resulting data.
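
A throwaway sketch of what I mean (toy numbers, hypothetical vocabulary, nothing from a real model): the decoding step is the same arithmetic whether the sampled continuation happens to be true or not. The model only ever sees probabilities, never truth values.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # One decoding step: temperature-scaled softmax, then sample.
        # Nothing here checks factuality; "useful" and "hallucinated"
        # tokens come out of the exact same computation.
        if rng is None:
            rng = np.random.default_rng()
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    # Hypothetical next-token candidates and scores. "Mars" gets nonzero
    # probability, so it will sometimes be sampled: same process, different
    # utility of the result.
    vocab = ["Paris", "Lyon", "Mars"]
    logits = np.array([2.0, 0.5, 0.1])
    rng = np.random.default_rng(0)
    print([vocab[sample_next_token(logits, rng=rng)] for _ in range(8)])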

I had a partially formed insight along these lines: LLMs exist in a latent space of information that has very little external grounding. A sort of dreamspace. I wonder if embodying them in robots will anchor them to some kind of ground-truth source?