robotcapital 5 days ago

It’s interesting that most of the comments here read like projections of folk-psych intuitions. LLMs hallucinate because they “think” wrong, or lack self-awareness, or should just refuse. But none of that reflects how these systems actually work. This is a paper from a team working at the state of the art, trying to explain one of the biggest open challenges in LLMs, and instead of engaging with the mechanisms and evidence, we’re rehashing gut-level takes about what they must be doing. Fascinating.

renewiltord 4 days ago | parent | next [-]

It's always the most low-brow takes as well. But the majority of Hacker News commenters "hallucinate" most of their comments in the first place, since they simply regurgitate the top answers based on broad bucketing of the subject matter.

Facebook? "Steal your data"

Google? "Kill your favourite feature"

Apple? "App Store is enemy of the people"

OpenAI? "More like ClosedAI amirite"

player1234 3 days ago | parent [-]

About the same way you regurgitate Sammy's cum in your mouth.

KajMagnus 4 days ago | parent | prev | next [-]

Yes, many _humans_ here hallucinate, sort of.

They apparently didn't read the article, or didn't understand it, or disregarded it. (Why, why, why?)

And they fail to realize that they don't know what they are talking about, yet keep talking anyway. Similar to an overconfident AI.

On a discussion about hallucinating AIs, the humans start hallucinating.

KajMagnus 4 days ago | parent | prev | next [-]

Could one say that humans are trained very differently from AIs?

If we (humans) make confident guesses but are wrong, then others will look at us disappointedly, thinking "oh, s/he doesn't know what s/he is talking about, I'm going to trust them a bit less from now on". And we'll tend to feel shame and want to withdraw.

That's a pretty strong punishment for being confidently wrong. Not so odd, then, that humans say "I'm not sure" more often than AIs do?

zahlman 5 days ago | parent | prev [-]

Calling it a "hallucination" is anthropomorphizing too much in the first place, so....

razzmatazmania 5 days ago | parent | next [-]

Confabulation is a human behavioral phenomenon that is not all that uncommon. Have you ever heard a grandpa tell a big-fish story? Have you ever pretended to know something you didn't because you wanted approval or to feel confident? Have you ever answered a test question wrong when you thought you were right? What I find fascinating about these models is that they are already more intelligent and reliable than the worst humans. I've known plenty of people who struggle to conceptualize and connect information and are helpless outside of familiar series of facts or narratives. That these models aren't even as large as human brains makes me suspect that practical hardware limits might still be in play here.

robotcapital 5 days ago | parent | prev [-]

Right, that’s kind of my point. We call it “hallucination” because we don’t understand it, but need a shorthand to convey the concept. Here’s a paper trying to demystify it so maybe we don’t need to make up anthropomorphized theories.

player1234 3 days ago | parent [-]

We do nothing; they call it hallucination to deceive.

Altman simping all over.