vidarh 6 days ago

> Probabilistic generative models are fun but no amount of probabilistic sequence generation can be a substitute for logical reasoning.

Unless you either claim that humans can't do logical reasoning, or claim humans exceed the Turing computable, then given you can trivially wire an LLM into a Turing complete system, this reasoning is illogical due to Turing equivalence.

And either of those two claims lack evidence.

voidhorse 5 days ago | parent | next [-]

Such a system redefines logical reasoning to the point that it no longer matches what a typical person means by the term.

It's Searle's Chinese Room scenario all over again, which everyone seems to have forgotten amidst the bs marketing storm around LLMs. A person with no knowledge of Chinese, following a set of instructions and reading from a dictionary to translate texts, can substitute for hiring a translator who understands Chinese, yet we would not claim that this person understands Chinese.

An LLM hooked up to a Turing machine would be similar with respect to logical reasoning. When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically. Instead, the process of deduction makes the line of reasoning decidedly not stochastic. I can't believe we've gotten to such a mad place that basic notions like logical deduction are being confused with stochastic processes.

Ultimately, I would agree that it all comes back to the problem of other minds: you either take a fully reductionist stance and claim the brain and intellection are nothing more than probabilistic neural firing, or you take a non-reductionist stance and assume there may be more to it. In either case, I think that claiming that LLMs+tools are equivalent to whatever process humans perform is kind of silly and severely underrates what humans are capable of^1.

1: Then again, this has been going on since the dawn of computing, which has always put forth its brain=computer metaphors more on the grounds of reducing what we mean by "thought" than on any substantively justified connection.

vidarh 2 days ago | parent | next [-]

Searle is an idiot. In Searle's scenario, the translating entity is the full system executing the translation "program", not the person running it.

And you failed to understand my argument. You are a Turing machine. I am a Turing machine. The LLM in a loop is a Turing machine.

Unless you can show evidence that, unlike the LLMs, we can execute more than the Turing computable, the theoretical limits on our reasoning are exactly the same as those of the LLM.

Absent any evidence at all that we can solve anything outside the Turing computable, or that any computable function even exists outside the Turing computable, the burden of proof is firmly on those making such an outrageous assumption to produce at least a single example of such a computation.

This argument doesn't mean any given LLM is capable of reasoning at the level of a human on its own, any more than it means a given person is able to translate Chinese on their own. But it does mean there's no evidentiary basis for claiming no LLM can be made to reason just like a human, any more than there's a basis for claiming no person can learn Chinese.

> When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically

This isn't how LLMs work either, so this is entirely irrelevant.

bopjesvla 5 days ago | parent | prev | next [-]

The Chinese Room has always been a hack of a thought experiment, one that was discussed in other forms before Searle posited it, most famously in Turing's "Computing Machinery and Intelligence" (the paper that opens by asking "Can machines think?"). Searle only superficially engaged with the existing literature in the original Chinese Room paper. When he was forced to do so later on, Searle claimed that if you precisely simulated the brain of a Chinese speaker in a human-like robot, that brain still wouldn't be able to think or understand Chinese. Not a useful definition of thinking if you ask me.

From Wikipedia:

Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker.[83][w] This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.

Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains."[26]

SpicyLemonZest 5 days ago | parent | prev [-]

> When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically.

I definitely imagine that and I'm surprised to hear you don't. To me it seems obvious that this is how humans reason logically. When you're developing a complex argument, don't you write a sloppy first draft then review to check and clean up the logic?

voidhorse 5 days ago | parent [-]

I think you're mistaking my claim for something else. When I say logical reasoning here, I mean the dead simple reasoning that tells you that 1 + 1 - 1 = 1 or that, by definition, x <= y and y <= x imply x = y. You can reach these conclusions because you understand arithmetic or aspects of order theory and can use the basic definitions of those theories to deduce other facts. You don't need to throw random guesses at the wall to reach these conclusions or operationally execute an algorithm every time, because you use your understanding and logical reasoning to reach an immediate conclusion; LLMs precisely don't do this. Maybe you memorize these facts instead of using logic, or maybe you consult Google each time, but then I wouldn't claim that you understand arithmetic or order theory either.
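
To put the kind of deduction I mean in concrete terms, here is a minimal sketch in Lean 4 (it assumes nothing beyond the standard Nat.le_antisymm lemma and definitional computation); the point is that the conclusions are derived from the definitions, not sampled and then checked:

  -- The two deductions above, derived rather than guessed.
  -- Antisymmetry: from x ≤ y and y ≤ x we conclude x = y, by the standard lemma.
  example (x y : Nat) (h₁ : x ≤ y) (h₂ : y ≤ x) : x = y :=
    Nat.le_antisymm h₁ h₂

  -- Arithmetic: 1 + 1 - 1 = 1 holds by computation from the definitions.
  example : 1 + 1 - 1 = 1 := rfl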

vidarh 2 days ago | parent [-]

LLMs don't "throw random guesses at the wall" in this respect any more than humans do.

11101010001100 6 days ago | parent | prev | next [-]

So we just need a lot of monkeys at computers?

godelski 6 days ago | parent | prev [-]

  > you can trivially wire an LLM into a Turing complete system
Please don't do the "the proof is trivial and left to the reader" thing[0].

If it is so trivial, show it. Don't hand wave, "put up or shut up". I think if you work this out you'll find it isn't so trivial...

I'm aware of some works along these lines, but every one I know of has limitations that keep the results from carrying over to actual LLMs. Plus, none of them are so trivial...

[0] https://en.wikipedia.org/wiki/Proof_by_intimidation

vidarh 2 days ago | parent [-]

You can do it yourself by setting temperature to zero and asking an LLM to execute the rules of a (2,3) Turing machine.

Since temperature zero makes it deterministic, you only need to test one step for each state and symbol combination.
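
For concreteness, here is the whole rule set such a prompt has to encode, written out in Python as a 6-entry lookup table. To be clear, the table below is a minimal illustrative example of a 2-state, 3-symbol machine, not Wolfram's universal (2,3) machine:

  # The entire rule set of a 2-state, 3-symbol Turing machine: 6 entries.
  # NOTE: an illustrative example, not Wolfram's universal (2,3) machine.
  # (state, symbol) -> (next_state, symbol_to_write, head_move)
  RULES = {
      ("A", 0): ("B", 1, +1),
      ("A", 1): ("A", 2, -1),
      ("A", 2): ("B", 0, +1),
      ("B", 0): ("A", 2, -1),
      ("B", 1): ("B", 0, +1),
      ("B", 2): ("A", 1, -1),
  }

  # Checking the prompt at temperature 0 means checking exactly these 6 cases:
  for (state, symbol), expected in RULES.items():
      print(f"{state},{symbol} -> {expected}")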

Are you suggesting you don't believe you can make a prompt that successfully encodes 6 trivial state transitions?

Either you're being intentionally obtuse, or you don't understand just how simple a minimal Turing machine is.

godelski 2 days ago | parent [-]

  > Are you suggesting you don't believe you can make a prompt that successfully encodes 6 trivial state transitions?
Please show it instead of doubling down. It's trivial, right? So it should be easier than responding to me. That'll end the conversation right here and now.

Do I think you can modify an LLM to be a Turing machine? Yeah, of course. But at this point it doesn't seem like we're actually dealing with an LLM anymore. In other comments you're making comparisons to humans; are you suggesting humans are deterministic? If not, well, I see a flaw in your proof.

vidarh 2 days ago | parent [-]

I've given an example prompt you can use as a basis in another comment, but let me double down, because the fact that you seem to think this is a complex problem really matters:

> That'll end the conversation right here and now.

We both know that isn't true, because it is so trivial that if you had any intention of being convinced, you'd have accepted the point already.

Do you genuinely want me to believe that you think an LLM can't act as a simple lookup from 6 keys (2 states, 3 symbols) to 6 tuples?

Because that is all it takes to show that an LLM + a loop can act like a Turing machine given the chance.

If you understand Turing machines, this is obvious. If you don't, even executing the steps personally per the example I gave in another comment is not likely to convince you.

> Do I think you can modify an LLM to be a Turing machine? Yeah, of course.

There's no need to modify one. This can be done by enclosing an LLM in simple scaffolding, or you can play it out in a chat as long as you can set the temperature to 0 (it will work without that as well, to an extent, but you can't guarantee that it will keep working).
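
Roughly, the scaffolding is just a loop around the model. Here is a minimal sketch in Python, reusing the same illustrative 6-rule table from my comment above; ask_llm() is a placeholder that answers from the table directly so the sketch is self-contained, and in practice you would replace its body with a call to whatever completion API you use, at temperature 0:

  # Sketch: an LLM as the transition function of a tiny Turing machine,
  # with the tape and the loop as scaffolding around it.
  from collections import defaultdict

  RULES_PROMPT = (
      "You are the transition table of a Turing machine.\n"
      "Rules (state,symbol -> next_state,write,move):\n"
      "A,0 -> B,1,R   A,1 -> A,2,L   A,2 -> B,0,R\n"
      "B,0 -> A,2,L   B,1 -> B,0,R   B,2 -> A,1,L\n"
      'Reply to "state,symbol" with exactly "next_state,write,move".\n'
  )

  def ask_llm(prompt: str) -> str:
      # Placeholder: stands in for a temperature-0 completion call.
      table = {"A,0": "B,1,R", "A,1": "A,2,L", "A,2": "B,0,R",
               "B,0": "A,2,L", "B,1": "B,0,R", "B,2": "A,1,L"}
      return table[prompt.rsplit("\n", 1)[-1]]

  def step(state: str, symbol: int) -> tuple[str, int, int]:
      reply = ask_llm(f"{RULES_PROMPT}{state},{symbol}")
      next_state, write, move = reply.split(",")
      return next_state, int(write), 1 if move == "R" else -1

  # Unbounded tape plus driving loop: the scaffolding.
  tape = defaultdict(int)   # blank symbol is 0
  state, head = "A", 0
  for _ in range(20):
      next_state, write, move = step(state, tape[head])
      tape[head] = write
      state = next_state
      head += move
      print(state, head, dict(tape))

The tape and the loop are what hold the unbounded state; all the model has to get right is a fixed, finite lookup, which is why temperature 0 and six test cases are enough to verify it.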

> But at this point it doesn't seem like we're actually dealing with an LLM anymore.

Are humans no longer human because we can act like a Turing machine?

The point is that anything Turing complete is computationally equivalent to anything else that is Turing complete. So, absent any evidence that it is possible to compute functions outside the Turing computable, demonstrating Turing completeness is sufficient to make it reasonable to assert equivalence in computational power.

The argument is not that any specific LLM is capable of reasoning like a human, but that there is no fundamental limit preventing LLMs from reasoning like a human.

> are you suggesting humans are deterministic?

I'm outright claiming we don't know of any mechanism by which we can calculate functions exceeding the Turing computable, nor have we ever seen evidence of it, nor do we know what that would even look like.

If you have any evidence that we can, or any evidence it is even possible - something that'd get you a Nobel Prize if you could show it - then by all means, enlighten us.