juancn 5 days ago

This is fluff. Hallucinations are not avoidable with current models: they are part of the latent space defined by the model and the way we explore it, so you'll always find some.

Inference is kinda like doing energy minimization in a high-dimensional space: the hallucinations are already there, and for some inputs you're bound to find them.
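A toy sketch of that sampling argument (mine, not from the paper), assuming a plain softmax decoder: every token keeps a non-zero probability, so a low-probability wrong continuation is always reachable if you run enough queries.

    import numpy as np

    # Toy illustration: a softmax over logits assigns non-zero probability
    # to every token, so a "wrong" continuation is never impossible,
    # only rare. The vocab and logits here are made up for the example.
    rng = np.random.default_rng(0)

    vocab = ["Paris", "Lyon", "Atlantis"]   # "Atlantis" stands in for a hallucination
    logits = np.array([5.0, 2.0, -3.0])     # model strongly prefers the right answer

    def sample(logits, temperature=1.0):
        p = np.exp(logits / temperature)
        p /= p.sum()
        return rng.choice(len(p), p=p), p

    _, p = sample(logits)
    print(dict(zip(vocab, p.round(6))))     # "Atlantis" gets ~3e-4, not 0

    # Over many queries, the rare mode eventually gets sampled.
    draws = [vocab[sample(logits)[0]] for _ in range(20000)]
    print("hallucinations:", draws.count("Atlantis"))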

kdnvk 5 days ago

Did you read the linked paper?

ninetyninenine 5 days ago

The majority of people in this thread didn't even click on the link. People are so taken with their own metaphysical speculations about what an LLM is.

The inventor of the LLM literally wrote an article, and everyone is criticizing it without even reading it. Most of these people have never built an LLM either.

player1234 3 days ago

Altman simp