barnacs | 4 days ago
> Thinking is using what you know to come to a logical conclusion

What LLMs do is use what they have _seen_ to come to a _statistical_ conclusion, just like a complex statistical weather forecasting model. I have never heard anyone argue that such models "know" about weather phenomena and reason about the implications to come to a "logical" conclusion.
chpatrick | 4 days ago | parent
I think people misunderstand what "statistical model" implies. It just means that, out of a range of possible answers, the model picks one in a humanlike way: if the logical answer is the humanlike thing to say, it is more likely to sample it. A human would likewise produce a range of answers to the same question, so humans can also be described as drawing from a theoretical statistical distribution when you talk to them. It's just a mathematical way to describe an agent, whether that agent is an LLM or a human.
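To make "picking from a distribution" concrete, here is a minimal sketch of temperature sampling over next-token scores. The vocabulary and logits are invented for illustration; a real LLM produces logits over tens of thousands of tokens, but the sampling step works the same way.

    import numpy as np

    # Hypothetical vocabulary and model scores (logits) -- not from any real model.
    vocab = ["yes", "no", "maybe", "42"]
    logits = np.array([2.0, 0.5, 1.0, 3.5])

    def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
        # Softmax turns raw scores into a probability distribution;
        # lower temperature concentrates mass on the top-scoring answer.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    # Sampling repeatedly yields a range of answers, weighted toward the
    # likeliest one -- that range is the "statistical distribution".
    print(vocab[sample_next_token(logits)])

The point of the sketch: "statistical" describes how one answer is chosen from many plausible ones, not whether the highest-probability answer happens to be the logical one.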