vidarh 4 days ago

It is. As it stands, wrap an LLM in a loop and let the loop act as the tape, and the LLM can obviously be made Turing complete: you can get it to execute all the steps of a minimal Turing machine, and if you drop the temperature to zero so it's deterministic, you have a Turing-complete system. To argue that LLMs can't be made to reason is effectively to argue that there is some unknown aspect of the brain that lets us compute functions outside the Turing-computable set, which would be an astounding revelation if it could be proven. Until someone comes up with evidence for that, it is more reasonable to assume the open question is whether we have yet found a training mechanism that can lead to reasoning, not whether LLMs are capable of learning to reason at all.
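
A minimal sketch of the construction being described, assuming the transition function stands in for a deterministic (temperature-0) LLM call; it is stubbed below with a lookup table for a trivial unary-increment machine so the harness runs as written:

  # Outer loop acts as the tape: it stores the tape and head position and
  # hands each (state, symbol) pair to a deterministic transition function.
  # In the argument above, that function would be a temperature-0 LLM call;
  # here it is a plain lookup table so the sketch is runnable.

  # Hypothetical rules: (state, symbol) -> (new state, symbol to write, head move)
  RULES = {
      ("scan", "1"): ("scan", "1", +1),   # move right over existing 1s
      ("scan", "_"): ("halt", "1", 0),    # write a 1 at the first blank, halt
  }

  def transition(state, symbol):
      # Stand-in for a deterministic (temperature-0) LLM prompted with the
      # machine's rules plus the current state and symbol.
      return RULES[(state, symbol)]

  def run(tape, state="scan", head=0, max_steps=1000):
      tape = list(tape)
      for _ in range(max_steps):
          if state == "halt":
              break
          symbol = tape[head] if head < len(tape) else "_"
          state, write, move = transition(state, symbol)
          if head == len(tape):
              tape.append(write)
          else:
              tape[head] = write
          head += move
      return "".join(tape)

  print(run("111_"))  # -> "1111": unary 3 incremented to 4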

vundercind 3 days ago | parent [-]

It doesn’t follow that because a system is Turing complete the approach being used will eventually achieve reasoning.

vidarh 3 days ago | parent [-]

No, but that was also not the claim I made.

The point is that, as the person I replied to noted, dismissing LLMs as "next token predictors" is meaningless: they can be both next-token predictors and Turing complete. Unless reasoning requires functions outside the Turing-computable set (and we know of no way to construct such functions, nor any evidence that they exist), calling LLMs "next token predictors" says nothing about their capabilities.