photonthug 3 days ago
> Then no human understands chess

Humans with correct models may nevertheless make errors in rule application. Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect, incomplete, or totally absent models. Without using a word like “understands”, it seems clear that the same apparent mistake has different causes, and model errors are very different from model-application errors. In a math or physics class this is roughly the difference between a carry-the-one arithmetic error and using an equation from a completely wrong domain. The word “understands” is loaded in discussions of LLMs, but everyone knows which mistake is going to get partial credit vs. zero credit on an exam.
og_kalu 3 days ago
> Humans with correct models may nevertheless make errors in rule applications.

Ok.

> Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect or incomplete models.

I don't know why people continue to force the wrong abstraction. LLMs do not work like 'machines'. They don't 'follow rules' the way we understand normal machines to 'follow rules'.

> so when they fail to apply rules correctly, it means they have incorrect or incomplete models.

Everyone has incomplete or incorrect models. It doesn't mean we always say they don't understand. Nobody says Newton didn't understand gravity.

> Without using a word like “understands” it seems clear that the same apparent mistake has different causes, and model errors are very different from model-application errors.

It's not very apparent, no. You've just decided it has different causes because of preconceived notions about how you think all machines must operate in all configurations. LLMs are not the logic automatons of science fiction. They don't behave or act like normal machines in any way. The internals run computations to make predictions, but so does your nervous system. Computation is substrate-independent.

I don't even know how you could make this distinction without seeing what sort of illegal moves it makes. If it makes the sort that high-rated players make, then what?
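As a concrete illustration of the kind of evidence being asked for here, a minimal sketch of how one might sort an LLM's illegal moves by why they fail (assuming the python-chess library; the move strings and the pinned-knight position are hypothetical, not data from this thread). "Valid motion that exposes the king" looks like the slip a strong human might make; "moves a piece that isn't there" or "a motion the piece can never make" looks more like a missing or stale model.

    # Sketch only: assumes python-chess; the moves and positions below are made-up examples.
    import chess

    def classify(board: chess.Board, uci: str) -> str:
        """Say why a proposed UCI move is or isn't playable on this board."""
        try:
            move = chess.Move.from_uci(uci)
        except ValueError:
            return "not even a well-formed move"            # e.g. "e9e5"
        if board.is_legal(move):
            return "legal"
        if board.piece_at(move.from_square) is None:
            return "moves a piece that isn't there"         # stale/absent board model?
        if board.is_pseudo_legal(move):
            return "valid motion, but leaves the king in check"  # human-style slip?
        return "motion that piece can never make"           # missing rule knowledge?

    start = chess.Board()                                   # standard starting position
    for uci in ["g1f3", "d4d5", "e2e5"]:                    # hypothetical LLM outputs
        print(uci, "->", classify(start, uci))

    # Pinned knight: e2-c3 is a fine knight move but exposes the king to the rook.
    pinned = chess.Board("4k3/8/8/8/4r3/8/4N3/4K3 w - - 0 1")
    print("e2c3", "->", classify(pinned, "e2c3"))

Which bucket the errors land in seems like exactly the evidence both sides of this argument would want.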
| |||||||||||||||||||||||||||||||||||
bawolff 3 days ago
This feels more like a metaphysical argument about what it means to "know" something, which is really irrelevant to what is interesting about the article.
sixfiveotwo 3 days ago
> Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect, incomplete, or totally absent models.

That's assuming that, somehow, an LLM is a machine. Why would you think that?