kombookcha 3 hours ago

LLMs by definition cannot deduce, because they do not know or think. There are guard rails to try to make them more correct than wrong, but ultimately it's about which words seem like they would fit when coming after your words.

It's a neat trick, and the mind wants to ascribe meaning and reason to words that sound meaningful and reasonable, but these words do not come from a thinking mind with intent and interiority. It would be much more interesting if they did, but when and if that does happen, it won't come from an LLM as we know them today.

dlm24 an hour ago | parent [-]

Yeah, agreed, "deduce" was a bad choice of words.

If you tell an LLM "explain X and cite reliable sources", would that then be more accurate?

Maybe it's the way the users are asking the questions, and perhaps prompting in the right way will lead to better (more accurate) results and reduce hallucinations?

kombookcha an hour ago | parent [-]

I think the fundamental problem is that humans use language to refer to things and constructs that exist and have various relationships with each other in meatspace, whereas LLMs use words solely as things that exist in relation to other words. That's inherently lossy if you're trying to make it fetch and regurgitate information encoded in the former format.

While the ability to interface with a computer program in plain language is the really interesting thing here IMO, it also comes with a number of problems baked in that are worse than person-to-person transfers of text and speech.

Your monkey brain is actually quite good at figuring out whether other monkeys are bullshitting you and what they mean, because you can make use of a vast number of small cues and unconscious tells in what they say and how they say it, even in writing. With an LLM, you cannot do this, because it will always have the same confident can-do zeal no matter what you ask it for.