tim333 6 days ago

Humans can do symbolic understanding that seems to rest on a rather flaky probabilistic neural network in our brains, or at least mine does. I can do maths and the like but there's quite a lot of trial and error and double checking involved.

GPT5 said it thinks it's fixable when I asked it:

>Marcus is right that LLMs alone are not the full story of reasoning. But the evidence so far suggests the gap can be bridged—either by scaling, better architectures, or hybrid neuro-symbolic approaches.

afiori 6 days ago | parent | next [-]

I sorta agree with you, but replying to "LLMs can't reason" with "an LLM says they can" is wild

tim333 6 days ago | parent | next [-]

I don't have a strong opinion on whether LLMs can reason or not. I think they can a bit, but not very well. I think that also applies to many humans, though. I was struck that, to my eyes, GPT5's take on the question seemed better thought out than Gary Marcus's, who is pretty biased toward the "LLMs are rubbish" school.

afiori 4 hours ago | parent [-]

Most of the arguments for the impossibility of intelligence in LLMs either require very restricted environments (ChatGPT might not be able to tell how many r's are in "strawberry", but it can write a Python script to do so, call it if given access to a shell or similar, and understand the answer) or implicitly assume that human brains have magic powers beyond Turing completeness.
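For what it's worth, the kind of script the comment has in mind is trivial; a minimal sketch (the letter-counting is the point, not any particular tooling):

```python
# Count occurrences of a letter with code, instead of relying on the
# model's tokenized view of the word.
word = "strawberry"
count = word.count("r")
print(f"'r' appears {count} times in {word!r}")  # 'r' appears 3 times in 'strawberry'
```

The model only needs to emit something like this and read back the printed result, which is exactly the tool-use loop described above.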

JohnKemeny 6 days ago | parent | prev [-]

I asked ChatGPT, and it agrees that it is indeed wild

wolvesechoes 6 days ago | parent | prev [-]

And I thought the gap was bridged by giving more billions to Sam Altman