freejazz 5 days ago
It seems readily apparent that there is a difference, given their inability to do tasks we would otherwise reasonably describe as achievable via basic reasoning on the same facts.
naasking 5 days ago
I agree LLMs have many differences in abilities relative to humans. I'm not sure what that implies for their ability to reason, though. I'm not even sure what examples of their bad reasoning can prove about the presence or absence of any kind of "reasoning", which is why I keep asking for definitions to remove the ambiguity. If examples of bad reasoning sufficed, that would prove humans can't reason either, which is silly.

A rigorous definition of "reasoning" is challenging, though, which is why people consistently fail to provide a satisfactory general one when I ask, and why I'm skeptical of the claim that pattern matching isn't a big part of it. Arguments that LLMs are "just pattern matching" are thus not persuasive arguments that they are not "reasoning" at some cruder level. Maybe humans are just higher-order pattern matchers and LLMs are only first- or second-order pattern matchers. Maybe first-order pattern matching shouldn't count as "reasoning", but should second-order? Third-order? Is there evidence or some proof that LLMs couldn't be trained to be higher-order pattern matchers, even in principle? None of the arguments or evidence I've seen about LLMs and reasoning is rigorous or persuasive on these questions.