dlm24 | 2 hours ago
Yes, agreed, "deduce" was a bad choice of words. If you tell an LLM "explain X and cite reliable sources", would that be more accurate? Maybe it's the way users are asking the questions, and perhaps prompting in the right way will lead to better (more accurate) results and fewer hallucinations?
kombookcha | an hour ago | parent
I think the fundamental problem is that humans use language to refer to things and constructs that exist and relate to each other in meatspace, whereas LLMs use words solely as things that exist in relation to other words. That's inherently lossy if you're trying to make an LLM fetch and regurgitate information encoded in the former format.

While the ability to interface with a computer program in plain language is the really interesting thing here IMO, it also comes with baked-in problems that are worse than person-to-person exchanges of speech or text. Your monkey brain is actually quite good at figuring out whether other monkeys are bullshitting you, and what they mean, because you can pick up on a vast number of small cues and unconscious tells in what they say and how they say it, even in writing. With an LLM you can't do this, because it approaches everything you ask it for with the same confident, can-do zeal.