John Searle has died (nytimes.com)
118 points by sgustard 12 hours ago | 116 comments
toomuchtodo 11 hours ago | parent | next [-]

https://archive.today/41HwM

https://en.wikipedia.org/wiki/John_Searle

Zarathruster 6 hours ago | parent | prev | next [-]

Of all the things I studied at Berkeley, the Philosophy of Mind class he taught is the one I think back on most often. The subject matter has only grown in relevance with time.

In general, I think he's spectacularly misunderstood. For instance: he believed that it was entirely possible to create conscious artificial beings (at least in principle). So why do so many people misunderstand the Chinese Room argument to be saying the opposite? My theory is that most people encounter his ideas from secondary sources that subtly misrepresent his argument.

At the risk of following in their footsteps, I'll try to very succinctly summarize my understanding. He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language. The Chinese Room argument might mislead people into thinking it's an epistemology claim ("knowing" the Chinese language) when it's really an ontology claim (consciousness and its objective, independent mode of existence).

If you think you disagree with him (as I once did), please consider the possibility that you've only been exposed to an ersatz characterization of his argument.

tsimionescu 4 hours ago | parent | next [-]

> His argument is much narrower: consciousness can't be instantiated purely in language.

No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example. But his belief, which he articulates very explicitly in the article, is that you couldn't create a machine consciousness by running even a perfect simulation of a biological brain on a digital computer, neuron for neuron and synapse for synapse. He likens this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building.

Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why. His ideas are quite muddy, and while he accuses others of supporting Cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate, it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.

mjburgess 3 hours ago | parent | next [-]

> this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building

> with no clear reason whatsoever as to why

It's not clear to me how you can understand that fire has particular causal powers (to burn, and so on) that are not instantiated in a simulation of fire; and yet not understand the same for biological processes.

The world is a particular set of causal relationships. "Computational" descriptions do not have a causal semantics, so they aren't about properties instantiated in the world. The program itself has no causal semantics; it's about numbers.

A program which computes the Fibonacci sequence describes equally well the growth of a sunflower's seeds and the agglomeration of galactic matter in certain galaxies.
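
To make that concrete, here is a minimal sketch (mine, not the commenter's) of such a program. Whether its output is "about" sunflower seeds or galaxies is supplied by the interpreter, not the code:

  # A purely formal rule with no causal semantics of its own.
  def fib(n):
      a, b = 1, 1
      for _ in range(n - 1):
          a, b = b, a + b
      return a

  # The program fixes only the numbers, not what they refer to.
  print([fib(n) for n in range(1, 10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34]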

A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- one which necessarily lacks the causal relations of what is being described. A simulation of fire is, by definition, not on fire -- otherwise it would simply be fire.

A simulation is a game to help us think about the world: the ability to derive some descriptive statements about a system without instantiating the properties of that system is a trivial thing, and it is always disappointing how easily it fools our species. You can move beads of wood around and compute the temperature of the sun -- this means nothing.

bondarchuk 22 minutes ago | parent | next [-]

>A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- one which necessarily lacks the causal relations of what is being described.

This notion of causality is interesting. When a human claims to be conscious, there is a causal chain from the fact that they are conscious to their claiming so. When a neuron-level simulation of a human claims it is conscious, there must be a similar causal chain, with a similar fact at its origin.

tsimionescu 2 hours ago | parent | prev | next [-]

There is a massive difference between chemical processes, like fire, and computational processes, which thinking likely is. A computer can absolutely be made to interact with the world in a way that assigns real physical meaning to the symbols it manipulates, a meaning entirely independent of any conscious being. For example, the computer that powers an automatic door has a clear meaning for its symbols intrinsic in its construction.
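
As a toy sketch of that grounding claim (the function names and the threshold are my own illustration, not any real controller's API), the controller's internal symbol gets its "meaning" from its causal hookup to a sensor and a motor:

  # Hypothetical automatic-door loop: the boolean `person_near` is
  # grounded by what causes it (a proximity reading) and by what it
  # causes (a motor command), independent of any human interpreter.
  def door_step(read_proximity_m, set_motor):
      person_near = read_proximity_m() < 1.5  # threshold in metres, illustrative
      set_motor("open" if person_near else "close")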

Saying that the symbols in the computer don't mean anything, that it is only we who give them meaning, presupposes a notion of meaning as something that only human beings and some things similar to us possess. It is an entirely circular argument, similar to the notion of p-zombies or the "experience of seeing red" thought experiment.

If indeed the brain is a biological computer, and if our mind, our thinking, is a computation carried out by this computer, with self-modeling abilities we call "qualia" and "consciousness", then none of these arguments hold. I fully admit that this is not at all an established fact, and we may still find out that our thinking is actually non-computational - though it is hard to imagine how that could be.

mjburgess an hour ago | parent [-]

There are no such things as "computational processes". Any computational description of reality describes vastly different sets of causal relata; nothing which exists in the real world is essentially a computational process -- everything is essentially causal, with a circumstantially useful computational description.

mattclarkdotnet 3 hours ago | parent | prev [-]

Because simulated fire burns other things in the simulation just as much as "real" fire burns real things. Searle & co. assert that there is a real world that has special properties, without providing any way to show that we are living in it.

mjburgess 2 hours ago | parent [-]

> Because simulated fire burns other things in the simulation just as much as “real” fire burns real things.

What we mean by a simulation is, by definition, a certain kind of "inference game" we play (e.g., with beads and chalk) that helps us think about the world. By definition, if that simulation has substantial properties, it isn't a simulation.

If the claim is that an electrical device can implement the actual properties of biological intelligence, then the claim is not about a simulation. It's that by manufacturing some electrical system, plugging various devices into it, and so on -- that this physical object has non-simulated properties.

Searle, and most other scientific naturalists who appreciate that the world is real, are not ruling out that it could be possible to manufacture a device with the real properties of intelligence.

It's just that merely by, e.g., implementing the Fibonacci sequence, you haven't done anything. A computational description doesn't imply any implementation properties.

Further, when one looks at the properties of these electronic systems and the kinds of causal relations they have with their environments via their devices, one finds very many reasons to suppose that they do not implement the relevant properties.

Just as much as when one looks at a film strip under a microscope, one discovers that the picture on the screen was an illusion. Animals are very easily fooled, apes most of all -- living as we do in our own imaginations half the time.

Science begins when you suspend this fantasy way of relating to the world and look at its actual properties.

If your world view requires equivocating between fantasy and reality, then sure, anything goes. This is a high price to pay to cling to the idea that the film is real, and that there's a train racing towards you in your cinema seat.

dvt 2 hours ago | parent | prev | next [-]

> while he accuses others of supporting Cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate

His views are perfectly consistent with non-dualism and if you think his views are muddy, that doesn't mean they are (they are definitively not muddy, per a large consensus). For the record, I am a substance dualist, and his arguments against dualism are pretty interesting, precisely because he argues that you can build something that functions in a different way than symbol manipulation while still doing something that looks like symbol manipulation (but also has this special property called consciousness, kind of like our brains).

Is this true? I don't know (I, of course, would argue "no"), but it does seem at least somewhat plausible and there's no obvious counter-argument.

tsimionescu 2 hours ago | parent [-]

I don't see how his views can be made sense of without dualism. He believed very much in this concept of qualia as some special property, and in the logical coherence of the concept of p-zombies, beings that would act exactly like a conscious being but without having qualia. This simply makes no sense unless you believe that consciousness is a non-physical property, one that the physical world acts upon but which can't itself act back upon it (as otherwise, there would obviously have to be some kind of meaningful physical difference between the being that possesses it and the being that doesn't).

Zarathruster an hour ago | parent | prev | next [-]

> No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example.

It's by no means irrelevant; the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema

Side note: while the Chinese Room put him on the map, he had as much to say about Philosophy of Language as he did of Mind. It was of more than passing interest to him.

> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.

I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.

I have, however, heard him say the following:

1. The structure and arrangement of neurons in the human nervous system creates consciousness.

2. The exact causal mechanism for this phenomenon is unknown.

3. If we were to engineer a set of circumstances such that the causal mechanism for consciousness (whatever it may be) were present, we would have to conclude that the resulting entity, be it biological, mechanical, etc., is conscious.

He didn't have anything definitive to say about the causal mechanism of consciousness, and indeed he didn't see that as his job. That was to be an exercise left to the neuroscientists, or in his preferred terminology, "brain stabbers." He was confident only in his assertion that it couldn't be caused by mere symbol manipulation.

> it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.

He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism: https://faculty.wcas.northwestern.edu/paller/dialogue/proper...

xtiansimon an hour ago | parent | prev | next [-]

>> “His argument is much narrower: consciousness can't be instantiated purely in language.”

> “No, his argument is that consciousness can't be instantiated purely in software…“

The confusion is very interesting to me, maybe because I'm a complete neophyte on the subject. That said, I've often wondered whether consciousness is necessarily _embodied_, or emerges from pure presence into language & body. Maybe the confusion is intentional?

jll29 2 hours ago | parent | prev [-]

Hardware and software are of course equivalent, as every computer scientist (but not every philosopher) knows.

D.R. Hofstadter posited that we can extract/separate the software from the hardware it runs on (the program-brain dichotomy), whereas Searle believed that these were not two layers but that consciousness was in effect a property of the hardware. And from that, as you say, it follows that you may re-create the property if your replica hardware is close enough to the real brain.

IMHO, philosophers should be rated by the debate their ideas create, and by that, Searle was part of the top group.

gwd 3 hours ago | parent | prev | next [-]

> He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language.

I haven't read loads of his work directly, but this quote from him would seem to contradict your claim:

> I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. [1]

Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.

[1] https://plato.stanford.edu/entries/chinese-room/
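
For concreteness, one way to read that <processor, program, state> tuple is as a minimal interpreter (a sketch with invented names, not from Searle or the quoted entry):

  # processor = a blind step rule; program = a transition table;
  # state = scratch memory. The question is whether "understanding"
  # could be a property of the triple as a whole, even though step()
  # never interprets the symbols it shuffles.
  class Room:
      def __init__(self, program):
          self.program = program   # the rulebook
          self.state = "start"     # the scratch paper

      def step(self, symbol):      # the processor
          self.state, out = self.program[(self.state, symbol)]
          return out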

Zarathruster 27 minutes ago | parent [-]

Sorry, I've reread this a few times and I'm not sure which part of Searle's argument you think I mischaracterized. Could you clarify? For emphasis:

> "consciousness can't be instantiated purely in language" (mine)

> "we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else" (Searle)

I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where.

> Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.

There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here: https://plato.stanford.edu/entries/chinese-room/#SystRepl

dr_dshiv 5 hours ago | parent | prev [-]

This is true of many philosophers. Once you read the source materials, you realize the depth of the thinking.

rahimnathwani 8 hours ago | parent | prev | next [-]

I learned about Searle's death a few weeks ago, from this article: https://www.colinmcginn.net/john-searle/

It includes a letter that starts:

  I am Jennifer Hudin, John Searle’s secretary of 40 years.  I am writing to tell you that John died last week on the 17th of September.  The last two years of his life were hellish. His daughter-in-law, Andrea (Tom’s wife) took him to Tampa in 2024 and put him in a nursing home from which he never returned.  She emptied his house in Berkeley and put it on the rental market.  And no one was allowed to contact John, even to send him a birthday card on his birthday.
  
  It is for us, those who cared about John, deeply sad.
I'm surprised to see the NYT obituary published nearly a month after his death. I would have thought he'd be included in their stack of pre-written obituaries, meaning it could be updated and published within a day or two.
blast 8 hours ago | parent | next [-]

I found the delay puzzling too. But the NYT obit does link to https://www.colinmcginn.net/john-searle/ near the end.

masfuerte 30 minutes ago | parent [-]

The Times in the UK publishes obituaries of very well-known public figures within a day or two. Notable but lesser known people (such as Searle) await a quiet day and it can take as long as six months. Space is the constraint, not the availability of the obituary. I guess the NYT is the same.

pfortuny 6 hours ago | parent | prev [-]

Wow, what a terrible way to be treated. Thank you for the quote.

asah 4 hours ago | parent [-]

There's a lot more to this y'all aren't seeing. Difficult family situation you shouldn't judge.

stared 2 hours ago | parent | prev | next [-]

John Searle is one of those thinkers I disagree with, yet his ideas were fruitful — providing plenty of fuel for discussion. In particular, much of Daniel Dennett’s work begins with rebuttals of Searle’s claims, showing that they are inconsistent or meaningless. As in a story by Stanisław Lem — we all know there are no dragons, but it’s all about the beauty of the proofs.

The same goes for "What Is It Like to Be a Bat?" by Thomas Nagel — one of the most cited essays in the philosophy of mind. I had heard numerous references to it and finally expected to read an insightful masterpiece. Yet it turned out to be slightly tautological: that to experience, you need to be. Personally, I think the word "be" is a philosopher’s snake oil, or a "lockpick word" — it can be used anywhere, but remains fuzzy even in its intended use; vide E-Prime, an attempt to write English without "be": https://en.wikipedia.org/wiki/E-Prime.

Kim_Bruning 9 hours ago | parent | prev | next [-]

Oh, I've always wanted to debate him about the Chinese Room. I disagree with him, passionately. And that's the most fun debate to have. Especially when it's someone who is actually really skilled and knowledgeable and nuanced!

Maybe I should look up some of my other heroes and heretics while I have the chance. I mean, you don't need to cold e-mail them a challenge. Sometimes they're already known to be at events and such, after all!

siglesias 9 hours ago | parent | next [-]

Searle has written responses to dozens of replies to the Chinese Room. It's likely that you can find his rebuttals to your objection in the Stanford Encyclopedia of Philosophy's entry on the Chinese Room, or deeper in a source in the bibliography. Is your rebuttal listed here?

https://plato.stanford.edu/entries/chinese-room

danielbarla 7 hours ago | parent [-]

> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese.

I mean, I guess all arguments eventually boil down to something which is "obvious" to one person to mean A, and "obvious" to me to mean B.

IanCal 4 hours ago | parent [-]

Same. I feel the Chinese room argument is a nice thing to clarify thinking.

Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.

Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.

I think it’s the last one. If a bunch of valves can’t understand but a bunch of chemicals and electrical signals can if it’s in someone’s head then I am simply applying “does it seem like biology” as part of the definition and can therefore ignore it entirely when considering machines or programs.

Searle seems to just go the other way and I don't understand why.

lxgr 3 hours ago | parent | next [-]

Exactly. Refuting the premise of the Chinese Room is usually a sign of somebody not even willing to entertain the thought experiment. Refuting Searle's conclusion is where interesting philosophical discussions can be had.

Personally, I'd say that there is a Chinese speaking mind in the room (albeit implemented on a most unusual substrate).

strogonoff an hour ago | parent | prev [-]

There are two distinct counter-arguments to this way of debunking the Chinese room experiment, not in any specific order.

First, it is tempting to assume that a bunch of chemicals is the territory, that it somehow gives rise to consciousness, yet that claim is neither substantiated nor even scientific. It is a philosophical view called “monistic materialism” (or sometimes “naive materialism”), and perhaps the main reason this view is currently popular is that people uncritically adopt it after studying the natural sciences, as if those fields made some sort of ground-truth statements about the underlying reality.

The key thing to remember is that this is not a valid claim within the scope of the natural sciences; this claim belongs to the larger field of philosophy (the branch often called metaphysics). It is not a useless claim, but within the framework of natural sciences it’s unfalsifiable and not even wrong. Logically, from the scientific method’s standpoint, even if it were the other way around—something like monistic idealism, where perception of time-space and the material world is the interface to (the map of) a conscious landscape, which is the territory and the cause—you would have no way of proving or disproving this, just like you cannot prove or disprove the claim that consciousness arises from chemical processes. (E.g., if somebody incapacitates some part of you involved in cognition, and your feelings or ability to understand change as a result, it’s pretty transparently an interaction between your mind and theirs, just with some extra steps, etc.)

The common alternatives to monistic materialism include Cartesian dualism (some of us know it from church) and monistic idealism (cf. Kant). The latter strikes me as the more elegant of the bunch, as it grants objective existence to the least amount of arbitrary entities compared to the other two.

It’s not to say that there’s one truly correct map, but just to warn against mistakenly trying to make a statement about objective truth, actual nature of reality, with scientific method as cover. Natural sciences do not make claims of truth or objective reality, they make experimentally falsifiable predictions and build flawed models that aid in creating more experimentally falsifiable predictions.

Second, while the scientific method tries to build a complete, formally correct and provable model of reality, there are arguments that such a model is impossible to create in principle. I.e., there will be some parts of the territory that are not covered by the map, and we might not know what those parts are, because this territory is not directly accessible to us: unlike a landmass we can explore in person, in this case all we have is maps, the perception of reality supplied by our mind, and said mind is, self-referentially, part of the very territory we are trying to model.

Therefore, it doesn’t strike me as a contradiction that a bunch of valves don’t understand yet we do. A bunch of valves, like an LLM, could mostly successfully mimic human responses, but the fact that this system mimics human responses is not an indication of it feeling and understanding like a human does, it’s simply evidence that it works as designed. There can be a very different territory that causes similar measurable human responses to arise in an actual human. That territory, unlike the valves, may not be fully measurable, and it can cause other effects that are not measurable (like feeling or understanding). Depending on the philosophical view you take, manipulating valves may not even be a viable way of achieving a system that understands; it has not been shown that biological equivalent of valves is what causes understanding, all we have shown is that those entities measurably change at the same time with some measurable behavior, which isn’t a causative relationship.

anvandare 8 hours ago | parent | prev [-]

All you have to do is train an LLM on the collected works and letters of John Searle; you could then pass your arguments along to the machine and out would come John Searle's thoughtful response...

ainiriand 8 hours ago | parent | next [-]

Something that would resemble 'John Searle's thoughtful response'...

watt 2 hours ago | parent [-]

I'll posit that the distinction does not matter: the whole Chinese Room line of discourse has been counterproductive to putting in actual work.

Uninen 2 hours ago | parent | prev | next [-]

You're absolutely right!

adastra22 4 hours ago | parent | prev [-]

I don't think John Searle would agree.

jfengel 11 hours ago | parent | prev | next [-]

Oh, bad timing. AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI. It's very close to the Chinese Room, which I had always dismissed as misleading. It's a great opportunity to investigate a former pure thought experiment. He'd have loved to see where it went.

somenameforme 7 hours ago | parent | next [-]

The Turing Test has not been meaningfully passed. Instead we redefined the test to make it passable. In Turing's original concept the competent investigator and participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to do the same, and to pass. Instead modern takes have paired incompetent investigators alongside participants colluding with the machine, probably in an effort to be part of 'something historic'.

In "both" successes (probably more, referencing the two most high-profile cases - Eugene and the LLMs), the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc.' - and the participants not only made no effort to make their humanity clear, but often were actively adversarial, obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. For instance, here is dialog from a human in one of the tests:

----

[16:31:08] Judge: don't you thing the imitation game was more interesting before Turing got to it?

[16:32:03] Entity: I don't know. That was a long time ago.

[16:33:32] Judge: so you need to guess if I am male or female

[16:34:21] Entity: you have to be male or female

[16:34:34] Judge: or computer

----

And the tests are typically time constrained by woefully poor typing skills (is this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of just several words each. The above snip was a complete interaction, so you get 2 responses from a human trying to trick the judge into deciding he's a computer. And obviously a judge determining that the above was probably a computer says absolutely nothing about the quality of responses from the computer - instead it's some weird anti-Turing Test where humans successfully act like a [bad] computer, ruining the entire point of the test.

The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that. I suspect in a true run of the Turing Test we're still nowhere even remotely close to passing it.

anigbrowl 9 hours ago | parent | prev | next [-]

I'm generally against LLM recreations of dead people but AI John Searle could be pretty entertaining.

dr_dshiv 5 hours ago | parent | next [-]

Indeed, necromancy is ethically fraught.

bitwize 8 hours ago | parent | prev [-]

I'm reminded of how the AIs in Her created a replica of Alan Watts to help them wrestle with some major philosophical problems as they evolved.

lo_zamoyski 9 hours ago | parent | prev [-]

> AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI.

Appealing to the Turing test suggests a misunderstanding of Searle's arguments. It doesn't matter how well computational methods can simulate the appearance of intelligence. What matters is whether we are dealing with intelligence. Since semantics/intentionality is what is most essential to intelligence, and computation as defined by computer science is a purely abstract syntactic process, it follows that intelligence is not essentially computational.

> It's very close to the Chinese Room, which I had always dismissed as misleading.

Why is it misleading? And how would LLMs change anything? Nothing essential has changed. All LLMs introduce is scale.

Zarathruster 6 hours ago | parent | next [-]

I came to say this, thank you for sparing me the effort.

From my experience with him, he'd heard (and had a response to) nearly any objection you could imagine. He might've had fun playing with LLMs, but I doubt he'd have found them philosophically interesting in any way.

pwdisswordfishy 5 hours ago | parent | prev [-]

"At least they don't have true consciousness, but only a simulated one", I tell myself calmly as I watch the nanobots devour the entirety of human civilization.

kmoser 11 hours ago | parent | prev | next [-]

> Professor Searle concluded that psychological states could never be attributed to computer programs, and that it was wrong to compare the brain to hardware or the mind to software.

Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and is constantly changing states while you are alive; a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.

levocardia 11 hours ago | parent | next [-]

Searle had an even stronger version of that belief, though: he believed that a full computational simulation of all of those gazillion inputs, being stimulated in all those manifold ways, would still not be conscious and not have a 'mind' in the human sense. The NYT obituary quotes him comparing a computer simulation of a building fire against the actual building going up in flames.

block_dagger 10 hours ago | parent [-]

When I read that analogy, I found it inept. Fire is a well defined physical process. Understanding / cognition is not necessarily physical and certainly not well defined.

sgt101 6 hours ago | parent | next [-]

>Understanding / cognition is not necessarily physical and certainly not well defined.

Whooha! If it's not physical what is it? How does something that's not physical interact with the universe and how does the universe interact with it? Where does the energy come from and go? Why would that process not be a physical process like any other?

lxgr 3 hours ago | parent | prev | next [-]

I'd say understanding and cognition are at this point fully explainable mechanistically. (I am very excited to live in a time where I was able to change my mind on this!)

Where we haven't made any headway is on the connection between that and subjective experience/qualia. I feel like many of the (in my mind) strange conclusions of the Chinese Room are about that, and not really about "pure" cognition.

netdevphoenix an hour ago | parent | prev | next [-]

Do you believe that there are things that are not physical? Extraordinary claims require extraordinary evidence. And no, "science can't explain x hence metaphysical" is not a valid response.

lo_zamoyski 9 hours ago | parent | prev | next [-]

That's debatable, but it is also irrelevant, as the key to the argument here is that computation is by definition an abstract and strictly syntactic construct - one that has no objective reality vis-a-vis the physical devices we use to simulate computation and call "computers" - while semantics or intentionality are essential to human intelligence. And no amount of syntax can somehow magically transmute into semantics.

vidarh 3 hours ago | parent [-]

This makes no sense. You could equally make the statement that thought is by definition an abstract and strictly syntactic construct - one that has no objective reality. Neither statement is supported by anything.

There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.

visarga 9 hours ago | parent | prev | next [-]

Simulated fire would burn down a simulated building.

measurablefunc 8 hours ago | parent [-]

If everything is simulated then "simulated(x)" is a vacuous predicate & tells you nothing so you might as well throw it away & speak directly in terms of the objects instead of wrapping/prepending everything w/ "simulated".

pwdisswordfishy 5 hours ago | parent [-]

"Simulated" is not a predicate, but a modality.

voidhorse 10 hours ago | parent | prev | next [-]

But that acknowledgement would itself lend Searle's argument credence, because much of the brain = computer thesis depends on a fundamental premise that brains and digital computers realize computation under the same physical constraints, that the "physical substrate" doesn't matter, and that there is necessarily nothing special about biophysical systems beyond computational or resource complexity. (The same thinking, by the way, leads to arguments that an abacus and a computer are essentially "the same"; at root these are all fallacies of unwarranted/extremist abstraction/reductionism.)

The history of the brain-computer equation is fascinating and incredibly shaky. Basically a couple of cyberneticists posed a brain = computer analogy back in the 50s with remarkably little justification, and everyone just ran with it anyway; very few people (Searle is one of those few) have actually challenged it.

vidarh 3 hours ago | parent | next [-]

Unless you can show an example of how we can compute something that is not Turing computable, there is no justification for the inverse, as the inverse would require something in the brain to be capable of interactions that can not be simulated. And we've no evidence to suggest either that the brain can do something not Turing computable or of the presence of something in the brain that can't be simulated.

ozy 5 hours ago | parent | prev | next [-]

Maybe consciousness is exactly like simulated fire. It does a lot inside the simulation, but is nothing on the outside.

lo_zamoyski 9 hours ago | parent | prev | next [-]

> The history of the brain-computer equation is fascinating and incredibly shaky. Basically a couple of cyberneticists posed a brain = computer analogy back in the 50s with remarkably little justification, and everyone just ran with it anyway; very few people (Searle is one of those few) have actually challenged it.

And something similar has happened whenever some new phenomenon falls under scientific investigation: the brain gets compared to mechanical force or hydraulics or electricity or quantum mechanics or whatever.

jacquesm 7 hours ago | parent | prev [-]

Roger Penrose would be another.

freejazz 10 hours ago | parent | prev [-]

Isn't that beside the point? The point is that something would actually burn down.

wzdd 9 hours ago | parent | next [-]

GP's point is that burning something down is by definition something that requires a specific physical process. It's not obvious that thinking is the same. So when someone says something like "just as a simulation of fire isn't the same as an actual fire (in a very important way!), a simulation of thinking isn't the same as actual thinking", they're arguing circularly, having already accepted their conclusion that both acts necessarily require a specific physical process. Daniel Dennett called this sort of argument an "intuition pump", which relies on a misleading but intuitive analogy to get you to accept an otherwise-difficult-to-prove conclusion.

To be fair to Searle, I don't think he advanced this as an agument, but more of an illustration of his belief that thinking was indeed a physical process specific to brains.

measurablefunc 8 hours ago | parent [-]

He explains it in the original paper¹ & says in no uncertain terms that he believes the brain is a machine & minds are implementable on machines. What he is actually arguing is that substrate independent digital computation will never be a sufficient explanation for conscious experience. He says that brains are proof that consciousness is physical & mechanical but not digital. Searle is not against the computationalist hypothesis of minds, he admits that there is nothing special about minds in terms of physical processes but he doesn't reduce everything to substrate independent digital computation & conclude that minds are just software running on brains. There are a bunch of subtle distinctions that people miss when they try to refute Searle's argument.

¹https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...

Zarathruster 5 hours ago | parent [-]

Quick definitional help for anyone who clicks on your link: the term "intentionality" in this context has a specialized meaning. In reference to mental states, it's the property of being about something, as in, "Alice is thinking about Bob." It doesn't necessarily have anything to do with intent, per se.

https://plato.stanford.edu/entries/consciousness-intentional...

anigbrowl 9 hours ago | parent | prev [-]

https://home.sandiego.edu/~baber/analytic/Lem1979.html

vidarh 3 hours ago | parent | prev | next [-]

Unless human brains exceed the Turing computable, they're still computationally equivalent, and we have no indication that exceeding the Turing computable is even possible.

lxgr 3 hours ago | parent | prev | next [-]

That's a quantitative distinction at most, since computationally both are equivalent (as both can simulate each other's basic components).

And what's a few orders of magnitudes in implementation efficiency among philosophers?

cannonpr 11 hours ago | parent | prev | next [-]

I think the statement above and yours both seem to ignore "Turing complete" systems, which would indicate that a computer is entirely capable of simulating the brain - perhaps not before the heat death of the universe; that's yet to be proven and depends a lot on what the brain is really doing underneath in terms of crunching.

voidhorse 10 hours ago | parent [-]

This depends on the assumption that all brain activity is the process of realizing computable functions. I'm not really aware of any strong philosophical or neurological position that has established this beyond dispute. Not to resurrect vitalism or something, but we'd first need to establish that biological systems are reducible to strictly physical systems. Even so, I think there's some reason to think that the highly complex social-historical process of human development might complicate things a bit more than just brute-force "simulate enough neurons". Worse, whose brain exactly do you simulate? We are all different. How do we determine which minute differences in neural architecture matter?

lo_zamoyski 9 hours ago | parent [-]

> we'd first need to establish that biological systems are reducible to strictly physical systems.

Or even more fundamentally, that physics captures all physical phenomena, which it doesn't. The methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects while also drawing on layers of abstractions where it is easy to mistakenly attribute features of these abstractions to reality.

sgt101 5 hours ago | parent | next [-]

>also drawing on layers of abstractions where it is easy to mistakenly attribute features of these abstractions to reality.

Ok - I get that bit. I have always thought that physics is a description of the universe as observed and of course the description could be misleading in some way.

>the methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects

Can you share the aspects of reality that physics ignores? What parts of reality are unquantifiable and not structural?

Thiez an hour ago | parent | prev [-]

Not all of physics is relevant to a brain simulation. For example, humans appear equally conscious in free fall or in an accelerating vehicle, so a simulation can probably safely ignore the effects of gravity without affecting the outcome. We also know that at body temperature (so about 310K) there is a lot of noise, so we can rule out subtle quantum effects. There is also noise from head movement, pressure changes due to blood flow, slight changes in the chemicals present (homeostasis is not perfect). We won't be simulating at the level of individual molecules or lower.

To me it seems highly likely that our knowledge of physics is more than sufficient for simulating the brain, what is lacking is knowledge of biology and the computational power.

anigbrowl 9 hours ago | parent | prev | next [-]

> a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening.

This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e., sleep) all the time.

p1esk 9 hours ago | parent | prev | next [-]

So you’re saying a brain is a computer, right?

kmoser 8 hours ago | parent [-]

In the sense that it can perform computations, yes. But the underlying mechanisms are vastly different from a modern digital computer, making them extremely different devices that are alike in only a vague sense.

sgt101 5 hours ago | parent [-]

I have always wondered if we would be capable of writing down the mechanisms that power our thoughts. I think that this was one of the ideas that bubbled up from reading Gödel, Escher, Bach many years ago. Is it possible for us to express the machine that makes us using the outputs of that machine, or is it impossible in the way that it's not possible to write second-order logic using first-order logic?

Of course, there are also processes that are not expressible as computations, but those that I know about seem very, very distant from human thought, and it seems very, very improbable that they could be implemented with a brain. I also think that these have not been observed in our universe so far.

DaveZale 10 hours ago | parent | prev [-]

Yes. I took an intro neuroscience course a few years ago. Even to understand what is happening in one neuron during one input from one dendrite requires differential equations. And there are positive and negative inputs and modulations... it is bewildering! And how many billions of neurons, with hundreds of interactions with surrounding neurons? And bundles of them, many still unknown?
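
For a feel of what "requires differential equations" means even for one input, here is a hedged sketch of a leaky integrate-and-fire neuron, one of the simplest standard point-neuron models (all parameter values are illustrative):

  # Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R*I,
  # integrated with Euler steps; the neuron spikes and resets at threshold.
  V_rest, V_thresh, V_reset = -70.0, -54.0, -80.0   # mV (illustrative)
  tau, R, dt = 20.0, 10.0, 0.1                      # ms, MOhm, ms

  V, spike_times = V_rest, []
  for step in range(1000):
      I = 2.0 if 200 <= step < 800 else 0.0         # nA: one square input pulse
      V += dt * (-(V - V_rest) + R * I) / tau
      if V >= V_thresh:                             # fire and reset
          spike_times.append(step * dt)
          V = V_reset

  print(spike_times)  # and this is the *simplest* point-neuron model

Real dendritic integration adds cable equations, conductances, and neuromodulation on top of this.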

p1esk 9 hours ago | parent | next [-]

Do you need differential equations to understand what’s happening in a transistor?

throwaway78940 10 hours ago | parent | prev [-]

Searle was known for the Chinese Room experiment, which demonstrated language in its translational states to be strong enclitic feature of various judgements of the intermediary.

sgt101 5 hours ago | parent [-]

>translational states to be strong enclitic feature of various judgements of the intermediary

I don't understand, could you explain what you mean?

I looked up enclitic - it seems to mean the shortening of a word by emphasizing another word; I can't understand why this would apply to the judgements of an intermediary.

dvt 2 hours ago | parent | prev | next [-]

As someone who studied philosophy, I can say his work is cited often and is absolutely instrumental in modern theory of mind. His work has seen a resurgence recently due to the explosion of LLMs. I've read 2 or 3 of his books, and he was a brilliant mind with clear & concise arguments. I met many of his collaborators at UCLA, but sadly never the man himself. Either way, his work has had a profound effect on me and my understanding of the world.

Rest in peace.

tananan 3 hours ago | parent | prev | next [-]

What strikes me as interesting about the idea that there is a class of computations that, however implemented, would result in consciousness, is that it is in some way really idealistic.

There's no unique way to implement a computation, and there's no single way to interpret what computation is even happening in a given system. The notion of what some physical system is computing always requires an interpretation on part of the observer of said system.

You could implement a simulation of the human body on common x86-64 hardware, water pistons, or a fleet of spaceships exchanging sticky notes between colonies in different parts of the galaxy.

None of these scenarios physically resemble each other, yet a human can draw a functional equivalence by interpreting them in a particular way. If consciousness is a result of functional equivalence to some known conscious standard (i.e., a living human being), then there is nothing materially grounding it, other than the possibility of being interpreted in a particular way. Random events in nature, without any human intercession, could be construed as a veritable moment of understanding French or feeling heartbreak, on the basis of being able to draw an equivalence to a computation surmised from a conscious standard.

When I think along these lines, it is easy to sympathize with the criticism of functionalism a la Chinese Room.

netdevphoenix an hour ago | parent | prev | next [-]

He brought so many unique contributions to the field. Top 10 in philosophy of mind imo. Sad that he chose to tarnish his legacy by preying on his students for decades. I find the lack of discussion in here around his misconduct very telling. There is so much to learn here regarding the way we revere bright minds like his that might not have the brightest of morals

ggm 11 hours ago | parent | prev | next [-]

> Informed once that the listing of an introductory philosophy course featured pictures of René Descartes, David Hume and himself, Professor Searle replied, “Who are those other two guys?” (the article)

ofrzeta 6 hours ago | parent | prev | next [-]

Consciousness in Artificial Intelligence | John Searle | Talks at Google (2015) https://www.youtube.com/watch?v=rHKwIYsPXLg

nextworddev 9 hours ago | parent | prev | next [-]

Obviously a meat brain is incomparable to an LLM - they are different types of intelligence. Any sane person wouldn't claim an LLM to be conscious in the meat-brain sense, but it may be conscious in an LLM way, like during the stretches of time when matrix multiplications are firing inside GPUs.

zahlman 2 hours ago | parent | next [-]

If an LLM could be "conscious in an LLM way", then why not the same, mutatis mutandis, for an ordinary computer program?

nurettin 7 hours ago | parent | prev [-]

It just aligns generated words according to the input. It is missing individual agency and self-sufficiency, which are hallmarks of consciousness. We sometimes confuse the responses with actual thought because neural networks solved language so utterly and completely.

Zarathruster 5 hours ago | parent | next [-]

Not sure I'd use those criteria, nor have I heard them described as hallmarks of consciousness (though I'm open, if you'll elaborate). I think the existence of qualia, of a subjective inner life, would be both necessary and sufficient.

Most concisely: could we ask, "What is it like to be Claude?" If there's no "what it's like," then there's no consciousness.

Otherwise yeah, agreed on LLMs.

nurettin 5 hours ago | parent [-]

I'd say being the maintainer of the weights is individual agency. Not just training new agents, but introspection. So an autonomous management system would be pretty much conscious.

cma 6 hours ago | parent | prev [-]

> It is missing individual agency and self-sufficiency, which are hallmarks of consciousness.

You can be completely paralyzed and completely conscious.

tsimionescu 5 hours ago | parent | next [-]

Yes, but you can't be completely suspended with no sensory input or output, not even internally (i.e. hunger, inner pains, etc), and no desires, and still be conscious.

nurettin 6 hours ago | parent | prev [-]

Yes, and you have individual agency while completely paralyzed.

jrflowers 8 hours ago | parent | prev | next [-]

It is not very often that you hear about somebody raising the cost of rent for everyone in an entire city by ~28% in a single year[0]. He will certainly be remembered.

0. https://www.academia.edu/30805094/The_Success_and_Failure_of...

emil-lp 6 hours ago | parent [-]

Searle famously argued that the treatment of landlords in Berkeley was comparable to the treatment of black people in the south ...

internet_points 4 hours ago | parent | next [-]

oh wow https://en.wikipedia.org/wiki/John_Searle#Political_activity

jrflowers an hour ago | parent | prev [-]

I personally struggle to imagine what it would be like to have an untouchable philosophy professor who does not see the difference between purchasing a seventeen-unit apartment building in Berkeley, California and being born black in the south. Sadly I was not there in the twenty-five to twenty-nine years between him making that argument and his departure from the university to experience that.

netdevphoenix 18 minutes ago | parent [-]

"Departure" is a very generous euphemism for "being kicked for taking advantage of his students". His contributions to the philosophy of mind are great and will be felt in Computer Science, especially with the current LLM tech but let's not skirt around the subject.

cess11 4 hours ago | parent | prev | next [-]

Well, at least it's a good reason to re-read his infamous exchange with Derrida.

When I studied in Ulaanbaatar some twenty years ago I met a Romanian professor of linguistics who had prepared by trying to learn Mongolian from books. He quickly concluded that his knowledge of Russian, Cyrillic, and having read his books didn't actually give him a leg up on the rest of us, and that pronunciation and rhythm, as well as more subtle aspects of the language like humour and irony, hadn't been appropriately transferred through the texts he'd read.

Rules might give you some grasp of a language, but breaking them with style and elegance without losing the audience is the sign of a true master and only possible by having a foundation in shared, embodied experience.

There's a crude joke in that Searle left academia disgraced the way he did.

mellosouls 10 hours ago | parent | prev | next [-]

Non-paywalled obit:

https://www.theguardian.com/world/2025/oct/05/john-searle-ob...

His most famous argument:

https://en.wikipedia.org/wiki/Chinese_room

tasty_freeze 9 hours ago | parent [-]

I find the Chinese room argument to be nearly toothless.

The human running around inside the room doing the translation work simply by looking up transformation rules in a huge rulebook may produce an accurate translation, but that human still doesn't know a lick of Chinese. Ergo (they claim) computers might simulate consciousness, but will never be conscious.

But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just like the human in the Searle room doesn't know how to speak Chinese.

slowmovintarget 9 hours ago | parent [-]

There is no translation going on in that thought experiment, though. There is text processing. That is, the man in the room receives Chinese text through a slot in the door. He uses a book of complex instructions that tells him what to do with that text, and he produces more Chinese text as a response according to those instructions.

Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.
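
As a crude sketch of that picture (the entries here are invented for illustration), the man's job is shape-to-shape substitution, with no semantic step anywhere:

  # The "book of complex instructions", reduced to a toy lookup table:
  # match the incoming squiggles, copy out the squiggles the book
  # dictates. Meaning is never consulted at any point.
  RULEBOOK = {
      "你好": "你好!",               # to the man, these are just shapes
      "你吃饭了吗?": "吃了, 谢谢!",
  }

  def man_in_room(text):
      return RULEBOOK.get(text, "请再说一遍?")  # fallback: "please say that again?"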

Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.

tsimionescu 5 hours ago | parent | next [-]

> I don't think that's a true statement, because at no point is the "system" dealing with semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.

There are two possibilities here. Either the Chinese room can produce the exact same output as some Chinese speaker would given a certain input, or it can't. If it can't, the whole thing is uninteresting, it simply means that the rules in the room are not sufficient and so the conclusion is trivial.

However, if it can produce the exact same output as some Chinese speaker, then I don't see by what non-spiritualistic criteria anyone could argue that it is fundamentally different from a Chinese speaker.

Edit: note that when I say the room can respond with the same output as a human Chinese speaker, that includes the ability for the room to refuse to answer a question, to berate the asker, to start musing about an old story or other non-sequiturs, to beg for more time with the asker, to start asking the asker for information, to gossip about previous askers, and so on. Basically the full range of language interactions, not just some LLM-style limited conversation. The only limitations in its responses would be related to the things it can't physically do - it couldn't talk about what it actually sees or hears, because it doesn't have eyes or ears, it couldn't truthfully say it's hungry, etc. It would be limited to the output of a blind, deaf, mute Chinese speaker confined to a room, whose skin is numb and who is being fed intravenously, etc.

randallsquared 9 hours ago | parent | prev | next [-]

> It only operates algorithmically on the input, which is distinctly not what people do when they read something.

That's not at all clear!

> Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.

All of that is called into question with some LLM output. It's hard to understand how some of that could be produced without some emergent model of the world.

slowmovintarget 9 hours ago | parent [-]

In the thought experiment as constructed it is abundantly clear. It's the point.

LLM output doesn't call that into question at all. Token production through a distance function in a high-dimensional vector representation space of language tokens gets you a long way. It doesn't get you understanding.

I'll take Penrose's notions that consciousness is not computation any day.

randallsquared 26 minutes ago | parent | next [-]

I should have snipped the "it operates" part to communicate better. I meant that it's not at all clear that people are doing something non-algorithmic.

Cogito 8 hours ago | parent | prev [-]

Out of interest, what do you think it would look like if communicating was algorithmic?

I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know?

jacquesm 7 hours ago | parent [-]

I think it would end inspiration.

adastra22 4 hours ago | parent [-]

Inspiration is what a search algorithm feels like from the inside.

jacquesm 4 hours ago | parent [-]

Can you elaborate?

adastra22 2 hours ago | parent [-]

This goes far to explain a lot of Chinese room situations. We have an intuition for the way something is. That intuition is an unshakeable belief, because it is something that we feel directly. We know what it feels like to understand Chinese (or French, or English, or whatever), and that little homunculus shuffling papers around doesn't feel like it.

Hopefully we have all experienced what genuine inspiration feels like, and we all know that experience. It sure as hell doesn't feel like a massively parallel search algorithm. If anything it probably feels like a bolt of lightning, out of the blue. But here's the thing. If the conscious loop inside your brain is something like the prefrontal cortex, which integrates and controls deeper processing systems outside of conscious reach, then that is exactly what we should expect a search algorithm to feel like. You -- that strange conscious loop I am talking to -- are doing the mapping (framing the problem) and the reducing (recognizing the solution), but not the actual function application and lower level analysis that generated candidate solutions. It feels like something out of the blue, hardly sought for, which fits all the search requirements. Genuine inspiration.

But that's just what it feels like from the inside, to be that recognizing agent that is merely responding to data being fed up to it from the mess of neural connections we call the brain.

You can take this insight a step further, and recognize that many of the things that seem intuitively "obvious" are actually artifacts of how our thinking brains are constructed. The Chinese room and the above comment about inspiration are only examples.

I cannot emphasize enough how much I dislike linking to LessWrong, and to Yudkowsky in particular, but I first picked up on this from an article there, and credit should be given where credit is due: https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg...

jacquesm 17 minutes ago | parent [-]

Fascinating, thank you very much, and agreed on Yudkowsky. It's a bit like crediting Wolfram.

ozy 5 hours ago | parent | prev [-]

That is why you cannot ask the room for semantic changes. Like “if I call an umbrella a monkey, and it will rain today, what do I need to bring?”

Unless we suppose those books describe how to implement a memory of sorts, and how to reason, etc. But then how sure are we it’s not conscious?

adastra22 4 hours ago | parent [-]

> if I call an umbrella a monkey, and it will rain today, what do I need to bring?

I'm not even sure what you are asking for, tbh, so any answer is fine.

gennarro 10 hours ago | parent | prev | next [-]

If you are wondering, it’s not the Doc guy with a similar name: https://en.wikipedia.org/wiki/Doc_Searls (But he was a PhD)

bfkwlfkjf 8 hours ago | parent | prev [-]

> It also claims that Jennifer Hudin, the director of the John Searle Center for Social Ontology, where the complainant had been employed as an assistant to Searle, has stated that Searle "has had sexual relationships with his students and others in the past in exchange for academic, monetary or other benefits".

Wiki

sgustard 8 hours ago | parent [-]

But she also claims he "was innocent and falsely accused": https://www.colinmcginn.net/john-searle/

jrflowers 7 hours ago | parent [-]

She could feel that the 2016 allegations specifically were unfounded while acknowledging the previous pattern of misconduct.

https://www.insidehighered.com/quicktakes/2017/04/10/earlier...