Zarathruster | 2 hours ago
Sorry, I've reread this a few times and I'm not sure which part of Searle's argument you think I mischaracterized. Could you clarify? For emphasis:

> "consciousness can't be instantiated purely in language" (mine)

> "we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else" (Searle)

I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where.

> Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.

There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here: https://plato.stanford.edu/entries/chinese-room/#SystRepl
gwd | 2 hours ago | parent
> I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where.

I'm far from an expert in this; my knowledge of the syntax / semantics distinction primarily comes from discussions w/ ChatGPT (and a bit from my friend who is a Catholic priest, who had some training in philosophy). But the quote says "purely formally or syntactically". My understanding is that Searle (probably thinking about the Prolog / GPS-type attempts at logical artificial intelligence prevalent in the '70s and '80s) is thinking of AI in terms of pushing symbols around. So, in this sense, the adder circuit in a processor doesn't semantically add numbers; it only syntactically adds numbers. (I've put a toy sketch of what I mean at the bottom of this comment.)

When you said "consciousness can't be instantiated purely in language", I took you to mean human language; that seems to leave the door open to consciousness (and thus semantics) being instantiated by a computer program in some other way. Whereas the quote from Searle very clearly says, "...the computer program *by itself* is not sufficient for consciousness..." (emphasis mine) -- seeming to rule out any possible computer program, not just those that work at the language level.

> There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here:

I mean, yeah, I read that. Let me quote the relevant part for those reading along:

> Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.

It sounds to me like Searle didn't understand the Systems Reply argument, because as the end of that section says, he's just moved the program and state parts of the <processor, program, state> tuple out of the room and into his head. The fact that the processor (Searle's own conscious mind) is now storing the program and the state in his own memory rather than externally doesn't fundamentally change the argument: if that tuple can "understand" things, then computers can "understand" things; and if that tuple can't "understand" things, then computers can't "understand" things. (Second sketch at the bottom.)

One must, of course, be humble when saying of a world-renowned expert, "He didn't understand the objection to his argument". But was Searle himself a programmer? Did he ever take a hard drive out of one laptop, pop it into another, and have the experience of the same familiar environment? Did he ever build an adder circuit, a simple register system, and a simple working computer out of logic gates, and see it suddenly come to life and execute programs? If he had, I can't help but think his intuitions regarding the syntax / semantics distinction would be different.

EDIT: I mean, I'm personally a Christian, and do believe in the existence of eternal souls (though I'm not sure exactly what those look like). But I'm one of those annoying people who will quibble with an argument whose conclusion I agree with (or to which I am sympathetic), because I don't think it's actually a good argument.
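To make the adder point concrete, here's a toy version of what I mean by "syntactic" addition: a ripple-carry adder built out of nothing but boolean gate operations. (Python standing in for actual gates, obviously; the bit width and names are just for illustration.) Each gate only maps symbols to symbols; that this amounts to "addition" is an interpretation we bring to it:

```python
# A 4-bit ripple-carry adder built from nothing but boolean "gates".
# No part of this "knows" it's adding: each gate just maps input
# symbols (True/False) to output symbols.

def full_adder(a: bool, b: bool, carry_in: bool) -> tuple[bool, bool]:
    """One column of binary addition: returns (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in                       # two XOR gates
    carry_out = (a and b) or (carry_in and (a ^ b))  # AND/OR gates
    return sum_bit, carry_out

def add_bits(a_bits: list[bool], b_bits: list[bool]) -> list[bool]:
    """Add two equal-length bit vectors, least-significant bit first."""
    result, carry = [], False
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 3 + 5: 0011 and 0101, written LSB-first.
print(add_bits([True, True, False, False],
               [True, False, True, False]))
# -> [False, False, False, True, False], i.e. binary 01000 = 8
```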
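And here's a minimal sketch of the <processor, program, state> point. (The state-machine framing, the rule table, and all the names here are my own toy illustration, not anything from Searle or the SEP article.) Whether the program and state live "outside" the processor or get copied "into" a different one, the behavior is the same, because it's a function of the tuple rather than of where the tuple is stored -- which is the hard-drive-swap intuition:

```python
# A toy <processor, program, state> machine. The "program" is a rule
# table mapping (state, input symbol) -> (next state, output symbol);
# the processor just looks rules up -- pure symbol manipulation.

Program = dict[tuple[str, str], tuple[str, str]]

class Processor:
    def __init__(self, program: Program, state: str = "start"):
        self.program = program    # the room's rule books
        self.state = state        # the room's scratch paper

    def step(self, symbol: str) -> str:
        self.state, output = self.program[(self.state, symbol)]
        return output

# A (silly) two-rule "Chinese room" rule book.
rules: Program = {
    ("start", "ni hao"): ("greeted", "ni hao!"),
    ("greeted", "hanbaobao?"): ("greeted", "hao chi!"),
}

room = Processor(rules)
print(room.step("ni hao"))        # -> ni hao!

# "Internalizing" the system: a *different* processor instance loads
# a copy of the same program plus the saved state -- like popping a
# hard drive into another laptop. The behavior is identical.
searle = Processor(dict(rules), state=room.state)
print(searle.step("hanbaobao?"))  # -> hao chi!
```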