dlm24 2 days ago
I hear you, but my understanding is that an LLM can consume multiple sources of information and deduce the truth better and more accurately than a human clicking through multiple Google links and verifying the information and sources.
kombookcha 2 days ago
LLMs by definition cannot deduce, because they cannot know or think. There are guard rails to try to make them more correct than wrong, but ultimately it's about which words seem like they would fit after your words. It's a neat trick, and the mind wants to ascribe meaning and reason to words that sound meaningful and reasonable, but those words do not come from a thinking mind with intent and interiority. It would be much more interesting if they did, but when and if that happens, it won't be from an LLM as we know them today.
lokar 2 days ago
Deduce? No. | |||||||||||||||||
b00ty4breakfast 2 days ago
They don't deduce the truth; they produce statistical predictions about which words go together (more or less; LLM nerds, don't @ me). If you fed an LLM a bunch of homeopathy texts, it's not going to tell you why homeopathy doesn't work.
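A minimal toy sketch of what that "statistical prediction" amounts to, with a made-up four-word vocabulary and hand-picked scores standing in for a real model's logits (everything here is hypothetical, just to show that the output is the most probable continuation, not a verified fact):

    import math

    # toy vocabulary and made-up scores ("logits") a model trained on
    # homeopathy texts might assign after the prompt "water has a"
    vocab  = ["memory", "boiling point", "molecular structure", "spirit"]
    logits = [2.1, 1.3, 0.4, -1.0]

    # softmax turns the scores into a probability distribution
    exps  = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    # the "answer" is simply the most probable continuation; nothing
    # here checks whether that continuation is true
    best = max(range(len(vocab)), key=lambda i: probs[i])
    print(vocab[best], round(probs[best], 2))   # memory 0.6

Swap the training data and the ranking swaps with it; the mechanism never consults the world.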
beeflet 2 days ago
At inference/tool-use time it's doing the same thing a human is doing in that regard, just faster. Training, though, is a blind process: it's up to the trainers to feed the model accurate sources.