tim333 | 6 days ago
Humans can do symbolic understanding that seems to rest on a rather flaky probabilistic neural network in our brains, or at least mine does. I can do maths and the like, but there's quite a lot of trial and error and double-checking involved. GPT-5 said it thinks it's fixable when I asked it:

> Marcus is right that LLMs alone are not the full story of reasoning. But the evidence so far suggests the gap can be bridged—either by scaling, better architectures, or hybrid neuro-symbolic approaches.
afiori | 6 days ago
I sorta agree with you, but replying to "LLMs can't reason" with "an LLM says they can" is wild.
wolvesechoes | 6 days ago
And I thought the gap was bridged by giving a few more billions to Sam Altman.