dlm24 2 days ago

I hear you, but my understanding is that an LLM can consume multiple sources of information and deduce the truth better and more accurately than a human clicking on multiple Google links and verifying information and sources.

kombookcha 2 days ago | parent | next [-]

LLMs by definition cannot deduce, because they cannot know or think. There are guard rails to try to make them more correct than wrong, but ultimately it's about which words seem like they would fit when coming after your words.

It's a neat trick, and the mind wants to ascribe meaning and reason to words that sound meaningful and reasonable, but these words do not come from a thinking mind with intent and interiority. It would be much more interesting if they did, but if and when that does happen, it won't be from an LLM as we know them today.

dlm24 2 days ago | parent [-]

Yeah, agreed, "deduce" was a bad choice of words.

If you tell an LLM "explain X and cite reliable sources", would that then be more accurate?

Maybe it's the way the users are asking the questions, and perhaps prompting in the right way will lead to better (more accurate) results and reduce hallucinations?

kombookcha 2 days ago | parent [-]

I think the fundamental problem is that humans use language to refer to things and constructs that exist and have various relationships with each other in meatspace, whereas LLMs use words solely as things that exist in relation to other words. That's inherently lossy if you're trying to make it fetch and regurgitate information encoded in the former format.

While the ability to interface with a computer program in plain language is the really interesting thing here IMO, it also comes with a number of problems baked in that are worse than person-to-person transfer of text or speech.

Your monkey brain is actually quite good at figuring out if other monkeys are bullshitting you and what they mean, because you can make use of a vast number of small cues and unconscious tells in what they say and how they say it - even in writing. With an LLM, you cannot do this because it will always have the same confident can-do zeal with everything you ask it for.

lokar 2 days ago | parent | prev | next [-]

Deduce? No.

b00ty4breakfast 2 days ago | parent | prev | next [-]

they don't deduce the truth; they're producing statistical predictions about which words go together (more or less; LLM nerds, don't @ me). If you fed an LLM a bunch of homeopathy texts, it's not going to tell you why homeopathy doesn't work.
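A toy sketch of that "statistical prediction" point. The vocabulary and scores below are entirely made up for illustration, not from any real model: the point is just that next-word probabilities reflect co-occurrence in the training text, not truth.

```python
import math

# Invented "logit" scores for the next word after a prompt like
# "homeopathy is", as if the model had been trained mostly on
# homeopathy texts. All values are hypothetical.
vocab_logits = {
    "effective": 2.0,       # frequent in the (hypothetical) training data
    "diluted": 1.2,
    "pseudoscience": 0.3,   # rare in that corpus, so scored low
}

def softmax(logits):
    """Turn raw scores into a probability distribution over next words."""
    exps = {w: math.exp(s) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(vocab_logits)

# The highest-probability continuation is whatever co-occurred most
# in training -- regardless of whether the resulting sentence is true.
best = max(probs, key=probs.get)
```

Here `best` comes out as "effective", because that word dominated the hypothetical training corpus, which is exactly the failure mode described above.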

beeflet 2 days ago | parent | prev [-]

In inference/tool use it's doing the same thing that a human is doing in that regard. Just faster.

In training, it's a blind process. It's up to the trainers to feed the model accurate sources.