kombookcha 3 hours ago
LLMs by definition cannot deduce, because they do not know or think. There are guard rails that try to make them more correct than wrong, but ultimately it comes down to which words seem like they would fit after your words. It's a neat trick, and the mind wants to ascribe meaning and reason to words that sound meaningful and reasonable, but these words do not come from a thinking mind with intent and interiority. It would be much more interesting if they did, but if and when that does happen, it won't be from an LLM as we know them today.
dlm24 an hour ago
Yeah, agreed, "deduce" was a bad choice of words. If you tell the LLM "explain X and cite reliable sources", would that then be more accurate? Maybe it's the way users are asking the questions, and perhaps prompting in the right way will lead to better (more accurate) results and fewer hallucinations?
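As a rough illustration of that idea, here's a minimal sketch of a prompt-template helper. The function name, wording, and the claim that this reduces hallucinations are all assumptions for the sake of the example, not tested results:

```python
# Hypothetical sketch: wrap a user's question with an explicit instruction
# to cite sources, as suggested above. Whether this actually improves
# accuracy or reduces hallucinations is an open question, not a guarantee.
def with_citation_request(question: str) -> str:
    return (
        f"Explain {question} and cite reliable sources. "
        "If you are not sure a source exists, say so instead of inventing one."
    )

# The resulting string would then be sent to whatever LLM API you use.
print(with_citation_request("how transformers tokenize text"))
```

The second sentence of the template matters too: asking the model to admit uncertainty, rather than only asking for sources, is one common mitigation, since models can otherwise fabricate plausible-looking citations.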