naasking | 5 days ago
> but rather the ability to reason in the general case, which requires the ability to LEARN to solve novel problems, which is what is missing from LLMs.

I don't think it's missing; zero-shot prompting is quite successful in many cases. Maybe you find the extent to which LLMs can do this too limited, but I'm not sure that means they don't reason at all.

> A system that has a fixed set of (reasoning/prediction) rules, but can't learn new ones for itself, seems better regarded as an expert system.

I think expert systems are a lot more limited than LLMs, so I don't agree with that classification. LLMs can generate output that's out of distribution, for instance, which is not something classic expert systems can do (even if you think LLM OOD is still limited compared to humans).

I've elaborated in another comment [1] on what I think part of the real issue is, and why people keep getting tripped up by saying that pattern matching is not reasoning. I think it's perfectly fine to say that pattern matching is reasoning, but pattern matching has levels of expressive power. First-order pattern matching is limited (and so reasoning is limited), and clearly humans are capable of higher-order pattern matching, which is Turing complete. Transformers are also Turing complete, and neural networks can learn any function, so it's not a matter of expressive power, in principle.

Aside from issues stemming from tokenization, I think many of these LLM failures are because they aren't trained in higher-order pattern matching. Thinking models and the generalization seen from grokking are the first steps on this path, but it's not quite there yet.
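To make the first-order vs. higher-order distinction concrete, here is a rough sketch in Python (my own illustration, not from the thread; the even/odd table, function names, and the composition example are all assumptions chosen for clarity). The point it tries to show: a fixed rule table can only recognise cases it already contains, while rules that operate on other rules can produce behaviour that was never enumerated.

```python
from typing import Callable

# First-order: match inputs against a fixed, finite table of patterns.
# The system can only ever handle what is already in the table.
FIRST_ORDER_RULES = {
    ("even", "even"): "even",   # even + even -> even
    ("even", "odd"):  "odd",
    ("odd",  "even"): "odd",
    ("odd",  "odd"):  "even",
}

def first_order(a: str, b: str) -> str:
    # Raises KeyError on anything outside the fixed rule table.
    return FIRST_ORDER_RULES[(a, b)]

# Higher-order: the "pattern" is itself a procedure that can be composed
# and iterated, so new cases are handled by computation rather than lookup.
Rule = Callable[[int], int]

def compose(f: Rule, g: Rule) -> Rule:
    # Builds a new rule out of existing rules: a pattern over patterns.
    return lambda x: f(g(x))

def iterate(f: Rule, n: int) -> Rule:
    # Unbounded iteration/composition is what pushes expressive power
    # toward Turing completeness; a fixed lookup table cannot express it.
    rule: Rule = lambda x: x
    for _ in range(n):
        rule = compose(f, rule)
    return rule

if __name__ == "__main__":
    print(first_order("even", "odd"))   # "odd" -- only works for tabled cases
    double = lambda x: 2 * x
    times_eight = iterate(double, 3)    # a rule built by composing rules
    print(times_eight(5))               # 40
```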
HarHarVeryFunny | 5 days ago | parent
Powerful pattern matching is still just pattern matching. How is an LLM going to solve a novel problem with just pattern matching? Novel means it has never seen the problem before and may not even have the knowledge needed to solve it, so it's not going to match any pattern; and even if it did, that wouldn't help if the problem requires a solution different from whatever the matched pattern came from.

Human-level reasoning includes the ability to learn, so that people can solve novel problems, overcome failures by trial and error, explore, etc.

So whatever you are calling "reasoning" isn't human-level reasoning, and it's therefore not clear what you are trying to say. Maybe just that you feel LLMs have room for improvement via better pattern matching?