b00ty4breakfast 4 hours ago

where do you think most of that info came from though? Not from the public library

dlm24 4 hours ago | parent [-]

I hear you, but my understanding is that LLMs can consume multiple sources of information and deduce the truth better and more accurately than a human clicking on multiple Google links and verifying information and sources.

kombookcha an hour ago | parent | next [-]

LLMs by definition cannot deduce, because they cannot know or think. There are guard rails to try to make them more correct than wrong, but ultimately it's about which words seem like they would fit when coming after your words.

It's a neat trick. The mind wants to ascribe meaning and reason to words that sound meaningful and reasonable, but these words do not come from a thinking mind with intent and interiority. It would be much more interesting if they did, but when and if that happens, it won't be from an LLM as we know them today.
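The "which words fit after your words" mechanism can be sketched with a toy bigram model. This is a deliberately minimal illustration, not how a real LLM works internally (those use deep networks over enormous corpora), but the generation loop has the same shape: score candidate next tokens given what came before, then pick one.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a hypothetical stand-in for real training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram frequency table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent follower of `word` (greedy decoding)."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

Nothing in this loop knows whether "the cat sat" is true of any actual cat; it only reflects word co-occurrence statistics, which is the point being made above.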

lokar 4 hours ago | parent | prev | next [-]

Deduce? No.

beeflet 3 hours ago | parent | prev [-]

In inference/tool use, it's doing the same thing a human does in that regard, just faster.

In training, it's a blind process. It's up to the trainers to feed the model accurate sources.