Tade0 2 days ago

But a simulated mind is not a mind. This was already debated years ago with the aid of the Chinese Room thought experiment.

dkural 2 days ago | parent | next [-]

The Chinese Room argument applies equally well to our own brains - in which neuron does the "thinking" reside, exactly? Searle's argument has been successfully countered in many different ways. At the end of the day, you're either a closet dualist like Searle, or, if you take the more scientific, physicalist view (i.e. brains are made of atoms and brains are sufficient for consciousness/minds), you're in the same situation as the Chinese Room: everything breaks down into tissues, neurons, molecules, atoms. Which atom knows Chinese?

Tade0 2 days ago | parent [-]

The whole point of the experiment was to show that when we don't know whether something is a mind, we shouldn't assume it is, and that our intuition in this regard is weak.

I know I am a mind inside a body, but I can't be sure about anyone else. The simplest explanation is that most other people are minds as well, given that we're the same species and I'm not special. You'll have to take my word on that, as my only proof is that I refuse to be seen as anything else.

In any case, LLMs are most likely not minds, for the simple reason that most of their internal state is static. What looks like a thoughtful reply is just the statistically most likely combination of words resembling language, produced by a function with a huge number of parameters. There's no way for this construct to grow, or to wither - things we know minds definitely do. All it knows is the sequence of symbols it has received and how that maps to an output. It cannot develop itself in any way and is trained by a wholly separate process.
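The "fixed function from symbols to output" claim above can be sketched in a few lines. This is a toy stand-in, not any real LLM API: `Model` and `generate` are hypothetical names, and the arithmetic is a placeholder for a real network's forward pass. The point it illustrates is that the parameters never change during inference, so the same prompt always maps to the same output.

```python
# Toy sketch of inference with frozen weights: a pure function of the
# input tokens. Names and arithmetic are illustrative only.

class Model:
    def __init__(self, weights):
        # Parameters are fixed at training time; inference never alters them.
        self.weights = weights

    def generate(self, tokens):
        # Output depends only on the static weights and the input tokens.
        # A stand-in for a real forward pass.
        return sum(w * t for w, t in zip(self.weights, tokens)) % 100

m = Model([3, 5, 7])
a = m.generate([1, 2, 3])
b = m.generate([1, 2, 3])
# Same prompt, same output - the model itself is unchanged by being used.
assert a == b
```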

dkural an hour ago | parent | next [-]

I am arguing against Searle's Chinese Room argument; I am not positing that LLMs are minds. My point is that your brain and the Chinese Room are both subject to the same reductionist move Searle makes: if we accept, as you say, that you are a mind inside a body, then in which neuron, or atom, does this mind reside? If you accept Searle's argument, you have to accept it for brains as well, including your own.

Now, separately, you are precisely the kind of closet dualist I speak of. You say that you are a mind inside a body, but that you have no way of knowing whether others have minds -- take this to its full conclusion: you have no way of knowing that you have a "mind" either. You feel like you do, as the biological assembly that you are. Either way, you believe in a kind of body-mind dualism without realizing it. Minds are not inside bodies. What you call a mind is a potential emergent phenomenon of a brain (potential, because brains get injured, etc.).

naasking 2 days ago | parent | prev [-]

> In any case LLMs most likely are not minds due to the simple fact that most of their internal state is static.

This is not a compelling argument. Firstly, you can add external state to LLMs via RAG and vector databases, or various other types of external memory, and their internal state is no longer static and deterministic (and they become Turing complete!).
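The external-memory point can be sketched as a minimal loop. This is a hedged illustration, not a real RAG stack: `fixed_model` stands in for a frozen LLM and a plain list stands in for a vector database. The model itself stays deterministic, but the system around it accumulates state between calls.

```python
# Sketch of an LLM with external memory: the "model" is a frozen,
# deterministic function, but a read/write store outside it evolves
# across calls, so the system as a whole is no longer static.

memory = []  # external store (stand-in for a vector DB / RAG index)

def fixed_model(prompt, context):
    # Stand-in for a frozen LLM: deterministic in (prompt, context).
    return f"answer({prompt}|{'+'.join(context)})"

def system_step(prompt):
    context = memory[-2:]           # retrieve recent entries
    reply = fixed_model(prompt, context)
    memory.append(reply)            # write back: system state changes
    return reply

r1 = system_step("q")
r2 = system_step("q")
# Same prompt, different replies - the loop carries state even though
# the model's own parameters never change.
assert r1 != r2
```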

Second, if you could rewind time, your argument would suggest that no other human has a mind, because you could access their state of mind at that point in time (it's static). Why would your travelling through time suddenly erase all other minds in reality?

The obvious answer is that it doesn't: those minds exist as time moves forward, and they reset when you travel backwards. The same would apply to LLMs if they have minds, e.g. they are active minds while they are processing a prompt.

Tade0 2 days ago | parent [-]

> and their internal state is no longer static and deterministic (and they become Turing complete!).

But it's not the LLM that makes modifications in those databases - it just retrieves data which is already there.

> Why would your travelling through time suddenly erase all other minds in reality?

I'm not following you here.

> they are active minds while they are processing a prompt.

The problem is that this process doesn't affect the LLM in the slightest. It just regurgitates what it's been taught. An active mind makes itself: it's curious, it gets bored, it's learning constantly. LLMs do none of that.

You couldn't get a real mind to answer the same question hundreds of times without it being changed by that experience.

naasking 2 days ago | parent [-]

> But it's not the LLM that makes modifications in those databases - it just retrieves data which is already there.

So what?

> I'm not following you here.

If you're time travelling, you're resetting the state of the world to some previous well-defined, static state. An LLM also starts from a well-defined static state. You claim this static configuration means there's no mind, so it follows that the ability to time travel would mean every person who is not time travelling has no mind.

> The problem is that this process doesn't affect the LLM in the slightest. It just regurgitates what it's been taught. An active mind makes itself.

So people who are incapable of forming new memories don't have minds?

https://en.wikipedia.org/wiki/Anterograde_amnesia

naasking 2 days ago | parent | prev | next [-]

> But a simulated mind is not a mind. This was already debated years ago with the aid of the Chinese Room thought experiment.

Yes, debated and refuted. There are many well-known and accepted rebuttals of the Chinese Room. The Chinese Room as a whole does understand Chinese.

echelon 2 days ago | parent | prev [-]

> But a simulated mind is not a mind.

How would the mind know which one it is?

Maybe your mind is being simulated right now.

Tade0 2 days ago | parent [-]

> How would the mind know which one it is?

I'm not assuming it is without hard proof - that's my only argument.

> Maybe your mind is being simulated right now.

I'm experiencing consciousness right now, so that would have to be a damn good simulation.