og_kalu 3 days ago
>Humans with correct models may nevertheless make errors in rule applications.

Ok

>Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect or incomplete models.

I don't know why people continue to force the wrong abstraction. LLMs do not work like 'machines'. They don't 'follow rules' the way we understand normal machines to 'follow rules'.

>so when they fail to apply rules correctly, it means they have incorrect or incomplete models.

Everyone has incomplete or incorrect models. It doesn't mean we always say they don't understand. Nobody says Newton didn't understand gravity.

>Without using a word like “understands” it seems clear that the same apparent mistake has different causes.. and model errors are very different from model-application errors.

It's not very apparent, no. You've just decided it has different causes because of preconceived notions about how you think all machines must operate in all configurations. LLMs are not the logic automatons of science fiction. They don't behave or act like normal machines in any way. The internals run some computations to make predictions, but so does your nervous system. Computation is substrate-independent.

I don't even know how you can make this distinction without seeing what sort of illegal moves it makes. If it makes the sort high-rated players make, then what?
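To make that last point concrete, here's a minimal sketch (assuming the python-chess library, `pip install chess`) of what "looking at the sort of illegal moves it makes" could mean in practice. The test position and the bucket names are my own illustration, not a definitive taxonomy: python-chess distinguishes moves that are legal, moves that follow piece-movement rules but are still illegal (pseudo-legal, e.g. moving a pinned piece), and moves that break basic movement rules outright.

    # Minimal sketch, assuming the python-chess library.
    # Position and bucket names below are illustrative assumptions.
    import chess

    def classify_move(board: chess.Board, uci: str) -> str:
        try:
            move = chess.Move.from_uci(uci)
        except ValueError:
            return "malformed"  # not even a well-formed move string
        if move in board.legal_moves:
            return "legal"
        if board.is_pseudo_legal(move):
            # Correct piece movement, but still illegal -- e.g. it leaves
            # the mover's own king in check. Strong humans slip like this too.
            return "subtle violation"
        return "gross violation"  # breaks basic piece-movement rules

    # After 1.e4 e5 2.Nf3 Nc6 3.Bb5 d6 4.Nc3 -- black's c6 knight is pinned.
    board = chess.Board(
        "r1bqkbnr/ppp2ppp/2np4/1B2p3/4P3/2N2N2/PPPP1PPP/R1BQK2R b KQkq - 2 4"
    )
    for uci in ["g8f6", "c6d4", "d6d3", "zz99"]:
        print(uci, "->", classify_move(board, uci))
    # g8f6 -> legal; c6d4 -> subtle violation (pinned knight);
    # d6d3 -> gross violation; zz99 -> malformed

If a model's illegal moves land mostly in the "subtle" bucket, that looks like the slips high-rated humans make; mostly "gross" or "malformed" would suggest something closer to a broken model of the rules.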
photonthug 3 days ago
I can’t tell if you are saying the distinction between model errors and model-application errors doesn’t exist or doesn’t matter or doesn’t apply here. | ||||||||||||||||||||||||||