bigfishrunning an hour ago
> And LLMs do have understanding.

They absolutely do not. If you "ask it how it came up with the process in natural language," it will produce an output that follows from the statistics encoded in the model. That output may or may not be helpful, but it is likely to be stylistically plausible. An LLM does not think or understand; it is merely a statistical model (that's what the M stands for!).
simianwords an hour ago | parent
How would you empirically show that it doesn't have understanding? I can prove that it does, because it behaves exactly like a human who understands: if I ask it to solve an integral and then ask follow-up questions about it, it replies exactly as if it had understood. Give me a specific example so we can stress-test this argument. For instance: what if we came up with a new board game with a completely new set of rules and saw whether it can reason about the game and beat humans (or come close)?