| ▲ | slowmovintarget 11 hours ago |
| There is no translation going on in that thought experiment, though. There is text processing. That is, the man in the room receives Chinese text through a slot in the door. He uses a book of complex instructions that tells him what to do with that text, and he produces more Chinese text as a response according to those instructions. Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something. Language, when conveyed between conscious individuals, creates a shared model of the world. This can lead to visualizations, associations, emotions, and the creation of new memories, because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument. |
|
| ▲ | tsimionescu 6 hours ago | parent | next [-] |
| > I don't think that's a true statement, because at no point is the "system" dealing with semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something. There are two possibilities here. Either the Chinese room can produce the exact same output as some Chinese speaker would given a certain input, or it can't. If it can't, the whole thing is uninteresting; it simply means that the rules in the room are not sufficient, and so the conclusion is trivial. However, if it can produce the exact same output as some Chinese speaker, then I don't see by what non-spiritualistic criteria anyone could argue that it is fundamentally different from a Chinese speaker. Edit: note that here when I'm saying that the room can respond with the same output as a human Chinese speaker, that includes the ability for the room to refuse to answer a question, to berate the asker, to start musing about an old story or other non sequiturs, to beg for more time with the asker, to start asking the asker for information, to gossip about previous askers, and so on. Basically the full range of language interactions, not just some LLM-style limited conversation. The only limitations in its responses would be related to the things it can't physically do - it couldn't talk about what it actually sees or hears, because it doesn't have eyes or ears, it couldn't truthfully say it's hungry, etc. It would be limited to the output of a blind, deaf, mute Chinese speaker confined to a room, whose skin is numb and who is being fed intravenously, etc. |
| |
▲ | netdevphoenix 2 hours ago | parent [-] | | > if it can produce the exact same output as some Chinese speaker, then I don't see by what non-spiritualistic criteria anyone could argue that it is fundamentally different from a Chinese speaker. Indeed. The crux of the debate is: a) how many input-response pairs are needed to agree that the rule-provider plus the Chinese room operation is fundamentally equal/different to a Chinese speaker, and b) what topics we can agree to exclude, so that if point a can be passed with the given set of topics, we can agree that 'the rule-provider plus the Chinese room operation' is fundamentally equal/different to a Chinese speaker | | |
| ▲ | tsimionescu an hour ago | parent [-] | | As far as I can see, Searle rejects the whole concept, and claims that by construction, it is obvious that the Chinese room doesn't understand Chinese in the same way that a speaker does, regardless of how well it can mimic Chinese speech. | | |
| ▲ | netdevphoenix 34 minutes ago | parent [-] | | > claims that by construction, it is obvious that Sounds like circular logic to me unless you make that assumption explicit |
|
|
|
|
| ▲ | randallsquared 10 hours ago | parent | prev | next [-] |
| > It only operates algorithmically on the input, which is distinctly not what people do when they read something. That's not at all clear! > Language, when conveyed between conscious individuals, creates a shared model of the world. This can lead to visualizations, associations, emotions, and the creation of new memories, because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument. All of that is called into question by some LLM output. It's hard to understand how some of that could be produced without some emergent model of the world. |
| |
▲ | slowmovintarget 10 hours ago | parent [-] | | In the thought experiment as constructed, it is abundantly clear. It's the point. LLM output doesn't call that into question at all. Token production through a distance function in a high-dimensional vector-representation space of language tokens gets you a long way. It doesn't get you understanding. I'll take Penrose's notion that consciousness is not computation any day. | | |
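As a rough illustration of what "token production through a distance function in a high-dimensional vector space" could mean, here is a minimal Python sketch. The tiny vocabulary, the random vectors, and the averaging context encoder are invented placeholders, not how any real LLM works; actual models learn their vectors and use far richer context encoding. The point is only that the whole procedure is syntactic: vectors in, nearest token out, with no meaning attached anywhere.

    # Minimal sketch (assumption-laden, not a real LLM): next-token choice as a
    # similarity ("distance") computation in a high-dimensional embedding space.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["umbrella", "monkey", "rain", "bring", "today"]   # toy vocabulary
    token_vectors = rng.normal(size=(len(vocab), 8))           # made-up 8-d embeddings

    def encode_context(tokens):
        """Stand-in context encoder: just average the context tokens' vectors."""
        idx = [vocab.index(t) for t in tokens]
        return token_vectors[idx].mean(axis=0)

    def next_token(context):
        """Emit the vocabulary token whose vector scores highest against the context."""
        scores = token_vectors @ encode_context(context)       # dot-product similarity
        return vocab[int(np.argmax(scores))]

    print(next_token(["rain", "today"]))   # purely syntactic: no semantics anywhere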
▲ | Cogito 10 hours ago | parent | next [-] | | Out of interest, what do you think it would look like if communicating were algorithmic? I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know? | | |
| ▲ | jacquesm 9 hours ago | parent [-] | | I think it would end inspiration. | | |
| ▲ | adastra22 6 hours ago | parent [-] | | Inspiration is what a search algorithm feels like from the inside. | | |
| ▲ | jacquesm 6 hours ago | parent [-] | | Can you elaborate? | | |
▲ | adastra22 3 hours ago | parent [-] | | This goes a long way toward explaining Chinese room situations. We have an intuition for the way something is. That intuition is an unshakeable belief, because it is something that we feel directly. We know what it feels like to understand Chinese (or French, or English, or whatever), and that little homunculus shuffling papers around doesn't feel like it. Hopefully we have all experienced what genuine inspiration feels like, and we all know that experience. It sure as hell doesn't feel like a massively parallel search algorithm. If anything, it probably feels like a bolt of lightning, out of the blue. But here's the thing. If the conscious loop inside your brain is something like the prefrontal cortex, which integrates and controls deeper processing systems outside of conscious reach, then that is exactly what we should expect a search algorithm to feel like. You -- that strange conscious loop I am talking to -- are doing the mapping (framing the problem) and the reducing (recognizing the solution), but not the actual function application and lower-level analysis that generates candidate solutions. It feels like something out of the blue, hardly sought for, which fits all the search requirements. Genuine inspiration. But that's just what it feels like from the inside, to be that recognizing agent that is merely responding to data being fed up to it from the mess of neural connections we call the brain. You can take this insight a step further, and recognize that many of the things that seem intuitively "obvious" are actually artifacts of how our thinking brains are constructed. The Chinese room and the above comment about inspiration are only examples. I cannot emphasize enough how much I dislike linking to LessWrong, and to Yudkowsky in particular, but I first picked up on this from an article there, and credit should be given where credit is due: https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg... | | |
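To make the map-and-reduce picture concrete, here is a toy sketch, purely an illustration rather than anything from the linked article: a bulk candidate-generation step stands in for the unconscious search, while a thin top layer only frames the problem and recognizes the winner. Every name and the scoring rule are invented for the example.

    # Toy sketch of "inspiration as search": generation and scoring happen in bulk,
    # and only the single best candidate ever surfaces to the "conscious" caller.
    import random

    def generate_candidates(length, n=1000):
        """Unconscious bulk work: propose many random candidate 'ideas'."""
        letters = "abcdefghijklmnopqrstuvwxyz "
        return ["".join(random.choices(letters, k=length)) for _ in range(n)]

    def score(candidate, target):
        """Crude fitness: count positions where the candidate matches the target."""
        return sum(a == b for a, b in zip(candidate, target))

    def inspiration(problem):
        # The top layer only frames the problem and recognizes the winner;
        # it never sees the thousands of rejected candidates.
        candidates = generate_candidates(len(problem))
        return max(candidates, key=lambda c: score(c, problem))

    print(inspiration("eureka"))   # the answer just "appears"; the search stays hidden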
| ▲ | jacquesm 2 hours ago | parent [-] | | Fascinating, thank you very much, and agreed on Yudkowsky. It's a bit like crediting Wolfram. |
|
|
|
|
| |
| ▲ | randallsquared 2 hours ago | parent | prev [-] | | I should have snipped the "it operates" part to communicate better. I meant that it's not at all clear that people are doing something non-algorithmic. |
|
|
|
| ▲ | ozy 7 hours ago | parent | prev [-] |
| That is why you cannot ask the room questions that involve semantic changes, like "if I call an umbrella a monkey, and it will rain today, what do I need to bring?" Unless we suppose those books describe how to implement a memory of sorts, and how to reason, etc. But then how sure are we it's not conscious? |
| |
| ▲ | adastra22 6 hours ago | parent [-] | | > if I call an umbrella a monkey, and it will rain today, what do I need to bring? I'm not even sure what you are asking for, tbh, so any answer is fine. |
|