Isamu a day ago

Someone commented here that hallucination is what LLMs do: by design they select statistically relevant patterns learned from the training set and mash them up into an output. The result is something that statistically resembles a real citation.

Creating a real citation is totally doable by a machine, though: it's just a matter of selecting the relevant text, looking up the title, authors, pages, etc., and putting that in canonical form. It's just that LLMs are not currently doing the work we ask for, but something similar in form that may be good enough.
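
A minimal sketch of that deterministic approach, assuming the public Crossref REST API (api.crossref.org) as the metadata source; the field names and formatting below reflect its typical JSON shape and are illustrative, not a fixed recipe:

    # Build a citation by looking up real metadata instead of generating it.
    import requests

    def lookup_citation(title_fragment: str) -> str | None:
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title_fragment, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json().get("message", {}).get("items", [])
        if not items:
            return None  # nothing found -> no citation, rather than a made-up one
        work = items[0]
        authors = ", ".join(
            f"{a.get('family', '')}, {a.get('given', '')}".strip(", ")
            for a in work.get("author", [])
        )
        title = (work.get("title") or ["(untitled)"])[0]
        venue = (work.get("container-title") or [""])[0]
        year = work.get("issued", {}).get("date-parts", [[None]])[0][0]
        doi = work.get("DOI", "")
        return f"{authors} ({year}). {title}. {venue}. https://doi.org/{doi}"

    print(lookup_citation("Attention Is All You Need"))

The key property is that every field in the output comes from a record that actually exists, and a missing record produces no citation at all.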

make3 13 hours ago

This interpretation would have been fair for older-generation models without search tools enabled and without reliable tool use and reasoning. Modern LLMs can actually look up whether papers exist via web search, and with reasoning you can definitely get reasonable results by requiring the model to double-check that everything it cites actually exists.
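
A sketch of what that "double-check" step could look like outside the model: take the citations it produced and keep only those that resolve against an external index. This again assumes the Crossref endpoint above; the similarity threshold and helper name are hypothetical choices for illustration:

    # Verify claimed titles against an external index before trusting them.
    import requests
    from difflib import SequenceMatcher

    def title_exists(claimed_title: str, threshold: float = 0.85) -> bool:
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": claimed_title, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json().get("message", {}).get("items", [])
        if not items:
            return False
        found = (items[0].get("title") or [""])[0]
        return SequenceMatcher(None, claimed_title.lower(), found.lower()).ratio() >= threshold

    citations = ["Attention Is All You Need", "A Paper The Model Invented (2031)"]
    verified = [c for c in citations if title_exists(c)]
    print(verified)  # keeps only citations that match a real record

In a tool-use setup the model itself can run this kind of lookup and drop anything it cannot confirm.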
