| ▲ | naasking 2 days ago |
| You're just assuming that mimicry of a thing is not equivalent to the thing itself. That equivalence fails for physical systems (simulated water doesn't get you wet!) but it holds for information systems (simulated intelligence is intelligence!). |
|
| ▲ | burnte 2 days ago | parent | next [-] |
| > You're just assuming that mimicry of a thing is not equivalent to the thing itself. |
| I'm not assuming that; that's literally the definition of mimicry: to imitate closely. You might say I'm assuming that it is mimicking and not actually thinking, but there's no evidence it's actually thinking, and we know exactly what it IS doing, because we created the code used to build the model. They're not thinking; they're doing math: mathematical transformations of data. |
| |
| ▲ | naasking 2 days ago | parent [-] |
| > They're not thinking; they're doing math: mathematical transformations of data. |
| Whatever thinking fundamentally is, it is also equivalent to a mathematical transformation of data. You're assuming the conclusion by saying that the two mathematical transformations of data are not isomorphic. A simulation of information processing is still information processing, just like running Windows in a QEMU VM is still running Windows. |
| ▲ | burnte 2 days ago | parent [-] |
| > Whatever thinking fundamentally is, it is also equivalent to a mathematical transformation of data. |
| Don't mistake the mathematical description of physical processes for the world being made of math. |
| > You're assuming the conclusion by saying that the two mathematical transformations of data are not isomorphic. |
| Correct. They're not isomorphic. One is simple math that runs on electrified sand, and the other is an unknown process that developed independently over a billion years. Nothing we're doing with AI today is even close to real thought. There are a billion trivial proofs that make the rounds as memes, like miscounting the Rs in "strawberry", or being unable to count, etc. |
| ▲ | naasking 2 days ago | parent [-] |
| > Don't mistake the mathematical description of physical processes for the world being made of math. |
| Again, this doesn't apply to information. A simulation of a computation really is equivalent to that computation. |
| > One is simple math that runs on electrified sand, and the other is an unknown process that developed independently over a billion years. |
| Right, so you admit that it's an unknown process, which means you literally cannot conclude that it is different from what LLMs are doing. |
| > There are a billion trivial proofs that make the rounds as memes, like miscounting the Rs in "strawberry", or being unable to count, etc. |
| No, none of these are definitive proofs that they are not thinking. An LLM's "perceptions" are tokens; the strawberry question is basically asking it to figure out something that's below its perceptual range. This has literally nothing to do with whether the way it processes information is or is not thinking. |
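A minimal sketch of the tokenization point, assuming the tiktoken library is installed; the exact subword split varies by encoding, but the model only ever receives integer IDs for multi-character chunks, never individual letters:

    # How a BPE tokenizer chops up "strawberry" (assumes `pip install tiktoken`).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    pieces = [enc.decode([i]) for i in ids]

    # The model is fed the integer IDs, not the letters, so counting the
    # occurrences of "r" means reasoning about characters it never saw directly.
    print(ids)     # a short list of token IDs
    print(pieces)  # multi-character chunks, not single letters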
| ▲ | burnte 2 days ago | parent [-] |
| > Right, so you admit that it's an unknown process, which means you literally cannot conclude that it is different from what LLMs are doing. |
| If you truly feel human thinking and LLMs share more than a cocktail napkin's worth of similarity, I don't know what to say. Even treating it as a black box, I can prove in minutes that it's not thinking. Come on. I really don't get why people are so emotionally invested in this stuff. It's not thinking. It's OK that it's not thinking. Maybe someday we'll get there, but it's not today. |
| ▲ | naasking 2 days ago | parent [-] |
| No, you can't. You're just not proving what you think you're proving. |
|
| ▲ | Tade0 2 days ago | parent | prev [-] |
| But a simulated mind is not a mind. This was already debated years ago with the aid of the Chinese Room thought experiment. |
| |
| ▲ | dkural 2 days ago | parent | next [-] |
| The Chinese Room experiment applies equally well to our own brains: in which neuron does the "thinking" reside, exactly? Searle's argument has been successfully argued against in many different ways. At the end of the day, you're either a closet dualist like Searle, or, if you have a more scientific view and are a physicalist (i.e. brains are made of atoms and are sufficient for consciousness/minds), you are in the same situation as the Chinese Room: things broken down into tissues, neurons, molecules, atoms. Which atom knows Chinese? |
| ▲ | Tade0 2 days ago | parent [-] |
| The whole point of the experiment was to show that if we don't know whether something is a mind, we shouldn't assume it is, and that our intuition in this regard is weak. I know I am a mind inside a body, but I'm not sure about anyone else. The easiest explanation is that most people are like that as well, considering we're the same species and I'm not special. You'll have to take my word on that, as my only proof is that I refuse to be seen as anything else. |
| In any case, LLMs most likely are not minds, due to the simple fact that most of their internal state is static. What looks like a thoughtful reply is just the statistically most likely combination of words, produced by a function with a huge number of parameters. There's no way for this construct to grow, or to wither, something we know minds definitely do. All it knows is the sequence of symbols it has received and how that maps to an output. It cannot develop itself in any way and is trained using a wholly separate process. |
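To make the "statistically most likely combination of words" claim concrete, here is a toy sampling loop: a fixed function maps a context to a probability distribution over a tiny vocabulary, and words are drawn from it. The vocabulary, logits, and next_token_probs function are invented for illustration; a real model computes the distribution from the context with billions of learned parameters, which stay frozen during generation.

    # Toy next-token sampling loop (illustrative only).
    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat", "."]

    def next_token_probs(context: list[str]) -> np.ndarray:
        # Stand-in for a trained model: hard-coded logits instead of a function
        # of billions of parameters conditioned on the context. The parameters
        # never change at generation time; only the context grows.
        logits = np.array([0.2, 1.5, 1.2, 0.8, 1.0, 0.3])
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()  # softmax -> probability distribution

    rng = np.random.default_rng(0)
    context = ["the"]
    for _ in range(5):
        probs = next_token_probs(context)
        context.append(vocab[rng.choice(len(vocab), p=probs)])

    print(" ".join(context))  # the sampled word sequence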
| ▲ | dkural an hour ago | parent | next [-] |
| I am arguing against Searle's Chinese Room argument; I am not positing that LLMs are minds. I am refuting it by pointing out that your brain and the Chinese Room are both subject to the same reductionist argument Searle uses: if we accept, as you say, that you are a mind inside a body, in which neuron or atom does this mind reside? My point is that if you accept Searle's argument, you have to accept it for brains, including your brain, as well. |
| Now, separately, you are precisely the type of closet dualist I speak of. You say that you are a mind inside a body, but you have no way of knowing that others have minds. Take this to its full conclusion: you have no way of knowing that you have a "mind" either. You feel like you do, as a biological assembly (which is what you are). Either way, you believe in some sort of mind-body dualism without realizing it. Minds are not inside bodies. What you call a mind is a potential emergent phenomenon of a brain (potential, because brains get injured, etc.). |
| ▲ | naasking 2 days ago | parent | prev [-] |
| > In any case, LLMs most likely are not minds, due to the simple fact that most of their internal state is static. |
| This is not a compelling argument. First, you can add external state to LLMs via RAG and vector databases, or various other kinds of external memory; then their state is no longer static and deterministic (and they become Turing complete!). Second, if you could rewind time, your argument would suggest that all other humans have no minds, because you could access the same state of mind at that point in time (it's static). Why would your travelling through time suddenly erase all other minds in reality? The obvious answer is that it doesn't: those minds exist as time moves forward and reset when you travel backwards, and the same would apply to LLMs if they have minds, e.g. they are active minds while they are processing a prompt. |
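A minimal sketch of the external-memory pattern described here, under stated assumptions: call_llm is a hypothetical placeholder for any chat-completion API, and the keyword-overlap "retrieval" is a deliberately crude stand-in for a vector database. The point it illustrates is that the system's state evolves between calls even though the model weights stay frozen.

    # LLM plus an external read/write memory store (sketch, not a real client).

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real chat-completion call.
        return f"(model reply to: {prompt[-60:]!r})"

    memory: list[str] = []  # notes persisted across turns

    def answer(question: str) -> str:
        # "Retrieve": pick stored notes sharing words with the question
        # (a real system would use embeddings in a vector database).
        q_words = set(question.lower().split())
        relevant = [m for m in memory if q_words & set(m.lower().split())]

        prompt = "Notes:\n" + "\n".join(relevant) + "\n\nQuestion: " + question
        reply = call_llm(prompt)

        # "Write back": the external state changes even though the model doesn't.
        memory.append("Q: " + question + " A: " + reply)
        return reply

    print(answer("Where did we leave off?"))
    print(answer("Remind me what we discussed?"))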
| ▲ | Tade0 2 days ago | parent [-] |
| > then their state is no longer static and deterministic (and they become Turing complete!) |
| But it's not the LLM that makes modifications in those databases; it just retrieves data that is already there. |
| > Why would your travelling through time suddenly erase all other minds in reality? |
| I'm not following you here. |
| > they are active minds while they are processing a prompt |
| The problem is that this process doesn't affect the LLM in the slightest. It just regurgitates what it's been taught. An active mind makes itself: it's curious, it gets bored, it's learning constantly. LLMs do none of that. You couldn't get a real mind to answer the same question hundreds of times without it being changed by that experience. |
| ▲ | naasking 2 days ago | parent [-] |
| > But it's not the LLM that makes modifications in those databases; it just retrieves data that is already there. |
| So what? |
| > I'm not following you here. |
| If you're time travelling, you're resetting the state of the world to some previous well-defined, static state. An LLM also starts from some well-defined, static state. You claim this static configuration means there's no mind, so it follows that the ability to time travel would mean that every person who is not time travelling has no mind. |
| > The problem is that this process doesn't affect the LLM in the slightest. It just regurgitates what it's been taught. An active mind makes itself. |
| So people who are incapable of forming new memories don't have minds? https://en.wikipedia.org/wiki/Anterograde_amnesia |
|
| ▲ | naasking 2 days ago | parent | prev | next [-] |
| > But a simulated mind is not a mind. This was already debated years ago with the aid of the Chinese Room thought experiment. |
| Yes, debated and refuted. There are many well known and accepted rebuttals of the Chinese Room. The Chinese Room as a whole does understand Chinese. |
| ▲ | echelon 2 days ago | parent | prev [-] |
| > But a simulated mind is not a mind. |
| How would the mind know which one it is? Maybe your mind is being simulated right now. |
| ▲ | Tade0 2 days ago | parent [-] |
| > How would the mind know which one it is? |
| I'm not assuming it's a mind without hard proof; that's my only argument. |
| > Maybe your mind is being simulated right now. |
| I'm experiencing consciousness right now, so that would have to be a damn good simulation. |