| |
| ▲ | sharts an hour ago | parent | next [-] | | I remember the guy saying that disembodied AI couldn’t possibly understand meaning. We see this now with LLMs. They just generate text. They get more accurate over time. But how can they understand a concept such as “soft” or “sharp” without actual sensory data with which to ground the concept and the varying degrees of “softness” or “sharpness”? The fact is that they can’t. Humans aren’t symbol manipulation machines. They are metaphor machines. And the metaphors we care about require a physical basis on one side of the comparison for there to be any real, fundamental understanding of the other side. Yes, you can approach human intelligence almost perfectly with AI software. But that’s not consciousness. There is no first-person subjective experience there to give rise to mental features. | | |
| ▲ | lostmsu an hour ago | parent [-] | | > I remember the guy saying that disembodied AI couldn’t possibly understand meaning. As far as I understand Popper, this is not a theory (or it is one, but a false one), because the only way to check understanding that I know of is to ask questions, and LLMs pass that test. So in order to satisfy falsifiability, another test must be devised. |
| |
| ▲ | mjburgess 5 hours ago | parent | prev | next [-] | | > this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building > with no clear reason whatsoever as to why It's not clear to me how you can understand that fire has particular causal powers (to burn, and so on) that are not instantiated in a simulation of fire, and yet not understand the same for biological processes. The world is a particular set of causal relationships. "Computational" descriptions do not have a causal semantics, so they aren't about properties had in the world. The program itself has no causal semantics; it's about numbers. A program which computes the Fibonacci sequence describes equally well the growth of a sunflower's seeds and the agglomeration of galactic matter in certain galaxies. A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily lacks the causal relations of what is being described. A simulation of fire is, by definition, not on fire -- if it were, it would be fire. A simulation is a game to help us think about the world: the ability to derive some descriptive statements about a system without instantiating the properties of that system is a trivial thing, and it is always disappointing how easily it fools our species. You can move beads of wood around and compute the temperature of the sun -- this means nothing. | | |
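To make the Fibonacci point concrete, here is a minimal sketch (not part of the original comment; Python, with illustrative names) of the claim that a program only relates numbers to numbers: reading its output as sunflower seed counts, galactic structure, or nothing at all is supplied entirely by the observer, not by anything in the code.

    # Minimal illustrative sketch: the program itself has no causal semantics;
    # it only maps numbers to numbers. Any link to sunflowers or galaxies is an
    # interpretation imposed from outside.
    def fib(n: int) -> int:
        """Return the n-th Fibonacci number."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Two different "interpretations" of the very same outputs:
    seeds_per_spiral = [fib(i) for i in range(1, 10)]   # read as phyllotaxis counts
    abstract_numbers = [fib(i) for i in range(1, 10)]   # read as pure arithmetic
    print(seeds_per_spiral == abstract_numbers)         # True: nothing in the program tells them apart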
| ▲ | mattclarkdotnet 4 hours ago | parent | next [-] | | Because simulated fire burns other things in the simulation just as much as “real” fire burns real things. Searle & co assert that there is a real world that has special properties, without providing any way to show that we are living in it | | |
| ▲ | mjburgess 4 hours ago | parent [-] | | > Because simulated fire burns other things in the simulation just as much as “real” fire burns real things. What we mean by a simulation is, by definition, a certain kind of "inference game" we play (e.g., with beads and chalk) that helps us think about the world. By definition, if that simulation has substantial properties, it isn't a simulation. If the claim is that an electrical device can implement the actual properties of biological intelligence, then the claim is not about a simulation. It's that by manufacturing some electrical system, plugging various devices into it, and so on, this physical object has non-simulated properties. Searle, and most other scientific naturalists who appreciate that the world is real, are not ruling out that it could be possible to manufacture a device with the real properties of intelligence. It's just that merely by, e.g., implementing the Fibonacci sequence, you haven't done anything. A computational description doesn't imply any implementation properties. Further, when one looks at the properties of these electronic systems and the kinds of causal relations they have with their environments via their devices, one finds very many reasons to suppose that they do not implement the relevant properties. Just as when one looks at a film strip under a microscope, one discovers that the picture on the screen was an illusion. Animals are very easily fooled, apes most of all -- living as we do in our own imaginations half the time. Science begins when you suspend this fantasy way of relating to the world and look at its actual properties. If your world view requires equivocating between fantasy and reality, then sure, anything goes. This is a high price to pay to cling on to the idea that the film is real, and there's a train racing towards you in your cinema seat. |
| |
| ▲ | tsimionescu 3 hours ago | parent | prev | next [-] | | There is a massive difference between chemical processes, like fire, and computational processes, which thinking likely is. A computer can absolutely be made to interact with the world in a way that assigns real physical meaning to the symbols it manipulates, a meaning entirely independent of any conscious being. For example, the computer that powers an automatic door has a clear meaning for its symbols, intrinsic in its construction. Saying that the symbols in the computer don't mean anything, that it is only we who give them meaning, presupposes a notion of meaning as something that only human beings and some things similar to us possess. It is an entirely circular argument, similar to the notion of p-zombies or the "experience of seeing red" thought experiment. If indeed the brain is a biological computer, and if our mind, our thinking, is a computation carried out by this computer, with self-modeling abilities we call "qualia" and "consciousness", then none of these arguments hold. I fully admit that this is not at all an established fact, and we may still find out that our thinking is actually non-computational - though it is hard to imagine how that could be. | | |
| ▲ | mjburgess 3 hours ago | parent [-] | | There are no such things as "computational processes". Any computational description of reality describes vastly different sets of causal relata; nothing which exists in the real world is essentially a computational process -- everything is essentially causal, with a circumstantially useful computational description. | | |
| ▲ | tsimionescu 2 hours ago | parent | next [-] | | On the contrary, computation is a very clear physical phenomenon, well understood and studied, so well understood that we can build machines to do it. And, again, those machines don't need any interpretation - they do measurable things in the real world, such as opening doors and cutting parts. | | |
| ▲ | mjburgess an hour ago | parent [-] | | I have never encountered this physical process. Here I am typing on a keyboard which is powered through an electrical field that is guided by a piece of wire under each key -- whose operation, when mechanically activated, is to induce some electrical state in some switches it is connected to, and so on. I associate the key with "K", and my screen displays a "K" shape when it is pressed -- but there is no "K"; this is all in my head. Just as when I go to the cinema and see people on the screen: there are no people. By ascribing a computational description to a series of electrical devices (whose operation distributes power, etc.) I can use this system to augment my own thinking. Absent the devices, the power distribution, and their particular causal relationships to each other, there is no computer. The computational description is an observer-relative attribution to a system; there are no "physical" properties which are computational. All physical properties concern spatio-temporal bodies and their motion. The real dualism is to suppose there are such non-spatio-temporal "processes". The whole system called a "computer" is an engineered electrical device whose construction has been designed to achieve this illusion. Likewise I can describe the solar system as a computational process: just discretize orbits and give their transition in a while(true) loop. That very same algorithm describes almost everything. Physical processes are never "essentially" computational; this is just a way of specifying some highly superficial feature which allows us to ignore their causal properties. It's mostly a useful description when building systems, i.e., an engineering fiction. |
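For what it's worth, the "discretize orbits and give their transition in a while(true) loop" move is easy to make literal. A minimal sketch (mine, not mjburgess's; Python, in toy normalized units with GM = 1 rather than real solar-system values) of stepping one orbit forward inside exactly such a loop:

    import math

    # Toy Euler integration of a single body orbiting a central mass, in
    # normalized units (GM = 1). The update rule is the "transition" applied
    # inside a while(true) loop; nothing in it is intrinsically planetary.
    GM = 1.0
    x, y = 1.0, 0.0        # position
    vx, vy = 0.0, 1.0      # velocity (roughly circular orbit at r = 1)
    dt = 0.01              # discretization step
    steps = 0

    while True:
        r = math.hypot(x, y)
        ax, ay = -GM * x / r**3, -GM * y / r**3   # Newtonian acceleration
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
        steps += 1
        if steps >= 1000:   # stop the demo here; the description itself could run forever
            break

    print(f"after {steps} steps: x={x:.3f}, y={y:.3f}")

Whether running this loop amounts to anything more than deriving descriptive statements is, of course, the point under dispute.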
| |
| ▲ | close04 39 minutes ago | parent | prev [-] | | Talking about simulations misses a critical aspect. The only thing that can accurately simulate a process or system is the actual, real process or system. Everything else contains simplifications and approximations. Fire is the result of the intrinsic reactivity of some chemicals, like fuels and oxidizers, that allows them to react and generate heat. A simulation of fire that doesn't generate heat is missing a big part of the real thing; it's very simplified. Compared to real fire, a simulation is closer to a fire emoji, both just depictions of a fire. A fire isn't the process of calculating what happens, it's molecules reacting in a certain way, in a well understood and predictable process. But if your simulation is accurate and does generate heat, then it can burn down a building by extending the simulation into the real world with a non-simulated fire. Consciousness is an emergent property of putting together a lot of neurons, synapses, and chemical and physical processes, so you can't analyze the parts to simulate the end result. You cannot look at the electronic neuron and conclude a brain accurately made of them won't generate consciousness. It might generate something even bigger, or nothing. And in a very interesting twist of the mind, if an accurate simulation of a fire can extend into the real world as a real fire, then why wouldn't an accurate simulation of a consciousness extend into the real world as a real consciousness? |
|
| |
| ▲ | bondarchuk 2 hours ago | parent | prev [-] | | >A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily lacks the causal relations of what is being described. This notion of causality is interesting. When a human claims that they are conscious, there is a causal chain from the fact that they are conscious to their claiming so. When a neuron-level simulation of a human claims it is conscious, there must be a similar causal chain, with a similar fact at its origin. |
| |
| ▲ | dvt 4 hours ago | parent | prev | next [-] | | > while he accuses others of supporting Cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate His views are perfectly consistent with non-dualism, and if you think his views are muddy, that doesn't mean they are (they are definitively not muddy, per a large consensus). For the record, I am a substance dualist, and his arguments against dualism are pretty interesting, precisely because he argues that you can build something that functions in a different way than symbol manipulation while still doing something that looks like symbol manipulation (but also has this special property called consciousness, kind of like our brains). Is this true? I don't know (I, of course, would argue "no"), but it does seem at least somewhat plausible and there's no obvious counter-argument. | | |
| ▲ | tsimionescu 3 hours ago | parent [-] | | I don't see how his views can be made sense of without dualism. He believed very much in this concept of qualia as some special property, and in the logical coherence of the concept of p-zombies, beings that would act exactly like a conscious being but without having qualia. This simply makes no sense unless you believe that consciousness is a non-physical property, one that the physical world acts upon but which can't itself act back upon it (as otherwise, there would obviously have to be some kind of meaningful physical difference between the being that possesses it and the being that doesn't). | | |
| ▲ | dvt an hour ago | parent [-] | | > This simply makes no sense unless you believe that consciousness is a non-physical property It does make sense, and there's work being done on this front (Penrose & Hameroff's Orch OR comes to mind). We obviously don't know exactly what such a mechanism would look like, but the theory itself is not inconsistent. Also, there are all kinds of p-zombies, so we likely need some specificity here. |
|
| |
| ▲ | jll29 4 hours ago | parent | prev | next [-] | | Hardware and software are of course equivalent, as every computer scientist (but not every philosopher) knows. D.R. Hofstadter posited that we can extract/separate the software from the hardware it runs on (the program-brain dichotomy), whereas Searle believed that these were not two layers, but that consciousness was in effect a property of the hardware. And from that, as you say, it follows that you may re-create the property if your replica hardware is close enough to the real brain. IMHO, philosophers should be rated by the debate their ideas create, and by that measure, Searle was part of the top group. | |
| ▲ | Zarathruster 3 hours ago | parent | prev | next [-] | | > No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example. It's by no means irrelevant -- the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema Side note: while the Chinese Room put him on the map, he had as much to say about Philosophy of Language as he did of Mind. It was of more than passing interest to him. > Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why. I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it. I have, however, heard him say the following: 1. The structure and arrangement of neurons in the human nervous system creates consciousness. 2. The exact causal mechanism for this phenomenon is unknown. 3. If we were to engineer a set of circumstances such that the causal mechanism for consciousness (whatever it may be) were present, we would have to conclude that the resulting entity -- be it biological, mechanical, etc. -- is conscious. He didn't have anything definitive to say about the causal mechanism of consciousness, and indeed he didn't see that as his job. That was to be an exercise left to the neuroscientists, or in his preferred terminology, "brain stabbers." He was confident only in his assertion that it couldn't be caused by mere symbol manipulation. > it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational. He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism: https://faculty.wcas.northwestern.edu/paller/dialogue/proper... | | |
| ▲ | tsimionescu an hour ago | parent [-] | | > It's by no means irrelevant -- the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema The Chinese room is an argument caked in notions of language, but it is in fact about consciousness more broadly. Syntax and semantics are not merely linguistic concepts, though they originate in that area. And while Searle may have been interested in language as well, that is not what this particular argument is mainly about (the title of the article is Minds, Brains, and Programs -- the first hint that it's not about language). > I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it. He said both things in the paper that introduced the Chinese room concept, as an answer to the potential rebuttals. Here is a quote about the brain that would be run in software: > 3. The Brain Simulator reply (MIT and Berkeley) > [...] The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties. And here is the bit about creating a real electrical brain, which he considers could be conscious: > "Yes, but could an artifact, a man-made machine, think?" > Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. > He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism: https://faculty.wcas.northwestern.edu/paller/dialogue/proper... I don't find this paper convincing. He admits at every step that materialism makes more sense, and then he asserts that still, consciousness is not ontologically the same thing as the neurobiological states/phenomena that create it. He admits that usually being causally reducible means being ontologically reducible as well, but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction. I am simply not convinced. |
| |
| ▲ | xtiansimon 3 hours ago | parent | prev [-] | | >> “His argument is much narrower: consciousness can't be instantiated purely in language.” > “No, his argument is that consciousness can't be instantiated purely in software…” The confusion is very interesting to me, maybe because I’m a complete neophyte on the subject. That said, I’ve often wondered whether consciousness is necessarily _embodied_, or whether it emerged from pure presence into language & body. Maybe the confusion is intentional? |
|
| |
| ▲ | Zarathruster 2 hours ago | parent [-] | | Sorry, I've reread this a few times and I'm not sure which part of Searle's argument you think I mischaracterized. Could you clarify? For emphasis: > "consciousness can't be instantiated purely in language" (mine) > "we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else" (Searle) I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where. > Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding. There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here:
https://plato.stanford.edu/entries/chinese-room/#SystRepl | | |
| ▲ | gwd 2 hours ago | parent [-] | | > I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where. I'm far from an expert in this; my knowledge of the syntax / semantics distinction primarily comes from discussions w/ ChatGPT (and a bit from my friend who is a Catholic priest, who had some training in philosophy). But the quote says "purely formally or syntactically". My understanding is that Searle (probably thinking about the Prolog / GPS-type attempts at logical artificial intelligence prevalent in the 70's and 80's) is thinking of AI in terms of pushing symbols around. So, in this sense, the adder circuit in a processor doesn't semantically add numbers; it only syntactically adds numbers. When you said, "consciousness can't be instantiated purely in language", I took you to mean human language; it seems to leave the door open to consciousness (and thus semantics) being instantiated by a computer program in some other way. Whereas the quote from Searle very clearly says, "...the computer program by itself is not sufficient for consciousness..." (emphasis mine) -- seeming to rule out any possible computer program, not just those that work at the language level. > There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here: I mean, yeah, I read that. Let me quote the relevant part for those reading along: > Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax. I mean, it sounds to me like Searle didn't understand the "Systems Reply" argument, because, as the end of that section says, he's just moved the program and state part of the <processor, program, state> tuple out of the room and into his head. The fact that the processor (Searle's own conscious mind) is now storing the program and the state in his own memory rather than externally doesn't fundamentally change the argument: If that tuple can "understand" things, then computers can "understand" things; and if that tuple can't "understand" things, then computers can't "understand" things. One must, of course, be humble when saying of a world-renowned expert, "He didn't understand the objection to his argument". But was Searle himself a programmer? Did he ever take a hard drive out of one laptop, pop it into another, and have the experience of the same familiar environment? Did he ever build an adder circuit, a simple register system, and a simple working computer out of logic gates, and see it suddenly come to life and execute programs? If he had, I can't help but think his intuitions regarding the syntax / semantic distinction would be different. EDIT: I mean, I'm personally a Christian, and do believe in the existence of eternal souls (though I'm not sure exactly what those look like). But I'm one of those annoying people who will quibble with an argument whose conclusion I agree with (or to which I am sympathetic), because I don't think it's actually a good argument. |
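As an aside, the "adder circuit" intuition is easy to spell out in code. Here is a minimal sketch (mine, not gwd's; Python functions standing in for logic gates) of a ripple-carry adder built only from AND/OR/XOR operations: it produces correct sums by pure bit-shuffling, and whether that counts as merely "syntactic" addition is exactly what the syntax/semantics dispute is about.

    # One-bit full adder built from gate-level operations (XOR, AND, OR).
    def full_adder(a: int, b: int, carry_in: int):
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    # Chain `bits` full adders together, exactly as a hardware ripple-carry adder does.
    def ripple_add(x: int, y: int, bits: int = 8) -> int:
        carry, result = 0, 0
        for i in range(bits):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(ripple_add(23, 42))   # 65 -- a correct sum, with no "understanding" anywhere in the circuit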
|
|