| ▲ | GMoromisato 8 hours ago |
| I think this is a circular argument. It defines a separation between computation and experience (between the abstraction and the "mapmaker") and then concludes that computation cannot be experience because they are in separate categories. There are really only two solutions to the Hard Problem of Consciousness:
1. Consciousness is an unknown physical something (force/particle/quantum whatever).
2. Consciousness is an illusion. It is the software telling itself something.
[Some people would add "3. Consciousness is an emergent property of certain systems." But that just raises the question: what emerged? Is it a physical structure, like a tornado (also an emergent property), or an internal feedback loop (i.e., an illusion)?]
The problem with #1 is that it's hard to cross the chasm from non-conscious to conscious with a bucket of parts. How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?
#2 makes more sense. Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth. |
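The IEEE-float analogy can be made concrete with a minimal sketch (illustrative only, not from the comment itself): the same 32 bits count as "a float" or "an integer" only under an interpretation we bring to them; the hardware just holds bits.

```python
import struct

# Illustrative sketch: an IEEE-754 float "exists" only as a bit pattern
# plus a convention for reading it. The circuit itself just holds 32 bits.
raw = struct.pack(">f", 3.14)            # encode 3.14 as 4 bytes (big-endian float32)
as_float = struct.unpack(">f", raw)[0]   # read the bits back "as a float"
as_int = struct.unpack(">I", raw)[0]     # read the very same bits "as an integer"
print(as_float, as_int)                  # same bits, two different "things"
```

Neither reading is more physically real than the other; "float" is a label on the map, not a property of the territory.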
|
| ▲ | brotchie 7 hours ago | parent | next [-] |
| Originally I rejected the paper's premise, but I get it now; it certainly made me question my belief that consciousness binds to any arbitrary information processing of sufficient complexity. IIUC the author is saying that the human brain is running directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dynamics). In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0). My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware. If you've done intense meditation / psychedelics, there's a moment when it becomes obvious that you are only "you" due to some kind of universal consciousness binding to your memory and sensory inputs. The claim that "consciousness arises from information processing," i.e. that the consciousness field binds to certain information-processing patterns, can still hold and yet not apply to AI (at least in its current form): the binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics. |
| |
| ▲ | tsimionescu 6 hours ago | parent | next [-] | | > while AI is running on a discretization of this (we're essentially discretizing the physical dynamics and to create state changes of 0 -> 1, 1 -> 0). But this is just a discretization we impose when we try to represent the system for ourselves. The reality is that the AI is a particular time-ordered relation between the continuous electric fields inside the CPU, GPU, and various other peripherals. We design the system such that we can call +5V "1" and 0V "0", but the actual physical circuits do their work regardless of this, and they will often be at 2V or 0.7V and everywhere in between. The physical circuit works (or doesn't) based exclusively on the laws of electricity, and so the answer of the LLM is a physical consequence of the prompt, just as a standing building is a physical consequence of the relationships between the atoms inside its blocks. The abstract description we chose to use to build this circuit or this building is irrelevant, it's just the map, not the territory. | | |
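The point that "1" and "0" are labels we impose on continuous voltages can be sketched in code. The threshold values below are hypothetical, chosen purely for illustration (real logic families such as TTL or CMOS define their own levels):

```python
# Sketch: the bit is our interpretation of a continuous voltage,
# not something the circuit itself "contains". The thresholds here
# are hypothetical illustration values, not a real logic-family spec.
def read_bit(voltage, v_low=0.8, v_high=2.0):
    if voltage <= v_low:
        return 0      # we agree to call this region "0"
    if voltage >= v_high:
        return 1      # we agree to call this region "1"
    return None       # physically fine, but undefined in our map

print([read_bit(v) for v in (0.2, 1.4, 3.3)])  # -> [0, None, 1]
```

The 1.4V case makes the map/territory gap visible: the physics is perfectly well defined there, and only our abstraction refuses to assign it a meaning.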
| ▲ | dwb 6 hours ago | parent | next [-] | | The computer and the program wouldn't exist without us, though. They only exist to be interpreted by us. The physical properties of the circuits outside of what we cajole them into doing are irrelevant, meaningless. The circuits only do their work regardless of particular interpretations; they wouldn't exist at all without people building them to be interpreted. | | |
| ▲ | tsimionescu 5 hours ago | parent [-] | | The physical computer could exist regardless of us. The program, if by that we mean "a human model of the computation happening in a physical computer", is just a description, yes. It would be extraordinarily unlikely, but physically conceivable, for a physical system organized exactly like a microcontroller running an automatic-door program, together with a solar panel, a basic motor, and a light sensor, to form randomly out of, say, a meteorite falling in a desert. If that did happen, the system would produce the same "door motor runs when person is near sensor" effect as the systems we build for this. The physical circuits are doing what they are doing because of physics. They don't care why they happen to be organized the way they are - whether by human design or through random chance. Edit: I can add another metaphor. Consider buildings: clearly, buildings are artificial objects, described by architectural diagrams, which are purely human constructs, and couldn't be built without them. And yet, there exist naturally occurring formations that have the same properties as simple buildings - and you can draw architectural diagrams of those naturally occurring formations; and, assuming your diagrams are accurate, you can use them to predict whether the formations will resist an earthquake or collapse. Physical computers are no different from artificial buildings here, and the logic diagrams and computer programs are no different from the architectural diagrams: they are methods that help us build what we want, but they are still discovered properties of the physical world, not idealized objects of our own making; that naturally occurring computers are very unlikely to form doesn't change this. |
| |
| ▲ | brotchie 6 hours ago | parent | prev [-] | | This is a good counter argument to the paper, honestly. | | |
| ▲ | TimTheTinker 6 hours ago | parent [-] | | I think a better counter is the question "Is there a meaningful difference between binary discretization and Planck units? Aren't those discrete/indivisible as well?" | | |
| ▲ | tsimionescu 5 hours ago | parent [-] | | That's not really a good counter - Planck units are not a discretization. Space-time is continuous in all quantum models, two objects can very well be 6.75 Planck lengths away from each other. The math of QM or QFT actually doesn't work on a discretized spacetime, people have tried. |
|
|
| |
| ▲ | mrandish 6 hours ago | parent | prev | next [-] | | I thought your "layer zero" analogy was an interesting avenue to reason about but you lost me with: > My currently belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wet ware. First, because it requires a huge leap into fundamental and universal physical mechanics for which there is currently zero objective evidence. Second, it's based entirely on individual interpretation of internal subjective experience. While some others (but not all) report similar interpretations or intuitions during some induced altered states, I think the much simpler explanation is that the internal 'sense of self' we normally experience is only one property of our mental processes and the sense of unbinding you temporarily experienced was a muting or disconnection of that component while keeping the rest of your 'internal experience machine' running. In your layer analogy, our sense of self may be akin to an interpreter running as a meta-process downstream of our input parser. Thus what you subjectively experienced while that interpreter was disconnected can seem alien and even profound. Neuroscientists have traced where in the brain the subjective sense of self emerges, so it's plausible it's a trait which can be selectively suppressed. Additionally, it's been demonstrated experimentally that subjectively profound experiences of universal connectedness sometimes described as spiritual, religious or metaphysical can be induced in a variety of ways. | |
| ▲ | colordrops 7 hours ago | parent | prev [-] | | Is there a layer zero though? What does that even mean? It implies the universe is designed and built upon layers of abstraction. That's just in our heads though, not out there. The layered model is a human abstraction. | | |
| ▲ | brotchie 7 hours ago | parent [-] | | It's the difference between: a) Actually pouring a cup of water into a pond (layer zero), and
b) Running a fluid dynamics simulation of pouring a cup of water into a pond (some layer above layer zero).
| | |
| ▲ | colordrops 7 hours ago | parent [-] | | I understand the original framing which is what you are repeating. I'm saying the framing itself is an illusion. It's an arbitrary distinction and also implies fully understanding all the underlying processes that go into pouring a cup of water in a pond (we don't) and that running a fluid dynamics simulation is some trivial thing (it's not). | | |
| ▲ | brotchie 6 hours ago | parent | next [-] | | Are you saying that, in some abstract sense, actually pouring the cup may be isomorphic to running a perfect simulation of pouring the cup? Genuinely curious about your statement that it's an illusion / arbitrary distinction, to figure out if there's a gap in my thinking / reasoning. To me there's a clear distinction between the actual thing happening via physical dynamics vs. us (humans) having created a discretized abstraction (binary computation) on top of that and running a process on that abstraction. Maybe there's some true computational universality where the universe's dynamics are discrete (definitely plausible) and there's no distinction in how a process's dynamics unfold: i.e. consciousness binds to states and state transitions regardless of how they are instantiated. I used to hold this view, but now I'm not so sure. |
| ▲ | dwb 6 hours ago | parent | prev [-] | | It's not arbitrary because people are making exactly this distinction in order to argue that it's possible for computers to be conscious, which this paper argues against. So the distinction exists at least for the purposes of this argument. Whether it "really" exists of course depends on your perspective. |
|
|
|
|
|
| ▲ | abeppu 7 hours ago | parent | prev | next [-] |
| I think #2 risks being incoherent unless you define things very carefully. "Illusion" ordinarily means there's someone with a subjective experience who forms incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion? > Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth. But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it "not any more real" than the LED case doesn't explain the difference between nervous-system behavior that does or doesn't rise to the level of conscious awareness. |
| |
| ▲ | GMoromisato 6 hours ago | parent [-] | | Agreed! The difficulty with consciousness is that there is no observable effect to distinguish between, say, actual pain and a simulation of pain (acting like you are in pain). And I don't think I have a good handle (much less a coherent definition) on what it means for consciousness to be an illusion. What I think it means is that the process that is getting signals about the environment, and making decisions about what to do, is getting a signal that it is in pain. The signal causes the process to alter its behavior, and one of its behaviors is that when it introspects, it notices that it is in pain. The introspection (how am I feeling?) is just a data-processing loop, but that process, which is responsible for tracking how it's feeling, is in the pain state. There's a lot of hand-waving here, which is why this is the Hard Problem of Consciousness and why this paper has not solved it. |
|
|
| ▲ | neosat 7 hours ago | parent | prev | next [-] |
| Agree with your points on the primary two questions and the circular argument in the original article.
However, re: "How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?" - that's an interesting question, but not necessarily a fundamental refutation of #1. If you start with #1, "Consciousness is an unknown physical something (force/particle/quantum whatever)", then it has 'perceivable' properties of its own, different from those of its constituent atoms or electrons. A toy example is the 'wetness' of water. If you only look at atoms and molecules, with no way to 'experience' water, then it's hard to conceive how water can have such a property (though in the case of water it is tractable). Consciousness *may* be something similar. If it is (e.g. the purest form of energy), then it is not inconceivable that it has some properties that are not tractable if we only look at more granular manifestations of it. |
| |
| ▲ | GMoromisato 6 hours ago | parent [-] | | Agreed! I'm skeptical of consciousness requiring some exotic new physics (a quantum phenomenon or a new form of energy or somesuch) but we can't prove that it doesn't. Honestly, if someday a scientist proves that consciousness is a fundamental force like gravity, I would say, "yup, that makes sense!" even if I don't think it's likely. |
|
|
| ▲ | Exoristos 7 hours ago | parent | prev | next [-] |
| 4. It is ἐνέργεια (energeia), a direct spark of God. It can be described but not comprehended, imitated but not replicated. |
| |
| ▲ | Windchaser 6 hours ago | parent [-] | | To be fair, at one time "life" was also seen this way: "There's a magic sauce, an élan vital, that makes living organisms live." But in the end, it turned out to be biochemistry. I think, given our history, it makes sense to be skeptical of claims that the things we don't yet understand cannot be comprehended or replicated. |
|
|
| ▲ | exitb 7 hours ago | parent | prev | next [-] |
| > Consciousness is an illusion. It is the software telling itself something. An illusion is a misinterpretation, which implies an observer. Who’s the observer then? |
| |
|
| ▲ | tim333 5 hours ago | parent | prev | next [-] |
| >2 ... It is the software telling itself something. I think human/animal consciousness works something like that - the neurons produce a summary of the organism's situation - what it's seeing, where it is, how it's feeling, etc. That summary is then an input to the thinking/acting parts of the brain, e.g. feeling hungry + in bedroom -> maybe walk to the fridge. I'm not sure illusion is the right word. Maybe something like situational summary? |
|
| ▲ | elliotec 7 hours ago | parent | prev | next [-] |
| #0 Is what William James described as consciousness not being a separate substance, but a set of relations within experience itself: > Consciousness connotes a kind of external relation, and does not denote a special stuff or way of being. The peculiarity of our experiences, that they not only are, but are known, which their 'conscious' quality is invoked to explain, is better explained by their relations — these relations themselves being experiences — to one another. |
|
| ▲ | vsri 7 hours ago | parent | prev | next [-] |
| I resonate with this. I think some folks will object to the word "illusion" and its connotations, but I think it is resolved with: 1. Consciousness is a material thing (that we haven't found yet). 2. Consciousness is not a material thing (and therefore we cannot "find" it, and thus it cannot be "known"). 2 is the weirder proposition, of course. It asserts a category of things that can't be conceived, yet it feels like we are talking about it because we are using words to contain it. But the words have no direct referent. That's the illusion. |
| |
| ▲ | TimTheTinker 7 hours ago | parent [-] | | 2 is only weirder if you don't already accept non-material reality, i.e. the proposition There exist real things that are not themselves composed of matter and/or energy. That's crossing into metaphysics, which isn't usually a welcome topic here, but the fact remains that more than 80% of the current and prior world population believes/believed in a non-material reality. The persistence and stickiness of that belief throughout history ought to at least make us sit up and pay attention. Something's going on, and it's not a mere historic lack of scientific rigor, notwithstanding science's penchant for filling gaps people previously attributed to spiritual causes. That near-universal reflex to attribute things to spiritual causes in the first place is what's interesting - why do people not merely say the cause is "something physical we don't understand"? | | |
| ▲ | mcphage 7 hours ago | parent [-] | | Tiger got to hunt, Bird got to fly; Man got to sit and wonder, "Why, why, why?" Tiger got to sleep, Bird got to land; Man got to tell himself he understand. —Kurt Vonnegut |
|
|
|
| ▲ | 7 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | renticulous 7 hours ago | parent | prev | next [-] |
| With the emergence argument, I have the following retort. How can something emerge if it wasn't embedded or hidden within the system already? |
| |
| ▲ | GMoromisato 6 hours ago | parent | next [-] | | I think when people say "emergent" they mean that it happens because of a combination of parts forming something greater. For example, if you decompose an airplane into its pieces, you will discover that none of the pieces can fly from Boston to San Francisco by itself. Wings can't fly without engines, engines can't work without fuel, etc. etc. Maybe consciousness is a process that requires many different components or steps. No one component is conscious, but the running process is. | |
| ▲ | renticulous 5 hours ago | parent [-] | | Would it be ok to say quantum fields are conscious in some sense? That a quality of consciousness cannot emerge if it isn't there already in the most fundamental aspects of the reality | | |
| ▲ | GMoromisato an hour ago | parent [-] | | I don't know. We get lost in definitions that way. What does "conscious" mean? What does it mean for consciousness to already be there? What do you mean by "fundamental aspect of reality"? [These are rhetorical questions--if you try to answer them, we will get lost in definitions.] For every other problem that science tackles, there are observable results. How long does the apple take to fall? What time will the sun rise on June 21st? We can make theories and see if the theories match reality. But with consciousness there are no observable results. I know that I'm conscious, but there is no way for me to observe your consciousness. And there is no way for me to prove that I'm conscious (as opposed to a philosophical zombie). |
|
| |
| ▲ | colordrops 7 hours ago | parent | prev [-] | | I don't know, why not? | | |
| ▲ | renticulous 6 hours ago | parent [-] | | If not, then it has been hoisted upon material systems from outside. Which is nothing but substance dualism argument. |
|
|
|
| ▲ | dsign 7 hours ago | parent | prev | next [-] |
| Hm. It only takes a life of study and a lot of pain to understand that #2 is the thing. But most of us get to experience the latter without experiencing the former, so for most people #1 is the preferred option. #1 leads to theism and offers an immediate balm. Unfortunately, it mostly excludes #2, and that leaves us in the merciless hands of God. |
|
| ▲ | polotics 7 hours ago | parent | prev | next [-] |
| there are many possible counterpoints, e.g. what happens if you rephrase your solution 2 by swapping the terms? |
|
| ▲ | 0xBA5ED 7 hours ago | parent | prev | next [-] |
| "It defines a separation between computation and experience" Does it? Or does it separate two forms of computation (or two forms of experience)? Isn't it just saying a GPU can't be a brain and a brain can't be a GPU? That the entirety of a thing's experience can't be replicated on a different substrate, only simulated. The substrate does fundamentally dictate the ultimate experience (or lack thereof) of the thing that computes within it. |
|
| ▲ | colordrops 7 hours ago | parent | prev | next [-] |
| What is a "real" thing and not an "illusion" if you go with #2? Is a car a real thing, or just a collection of atoms? Is an atom a real thing? Or a collection of processes? Is it not turtles all the way down? What is "real"? |
| |
| ▲ | 0xBA5ED 7 hours ago | parent [-] | | Well, if you can't concede that anything is real, that sort of makes you crazy, doesn't it? A tree is real. But the concept of a tree and the word "tree" and all the ideas you have about the tree and what "tree" means, is that real? No, because it doesn't change the nature of the tree. When you cease to exist, the tree will still be there. Can you be absolutely 100% sure of that? Also no. But if you believe that other people are conscious individuals like you are, and that some of them die and the tree keeps going, you can concede that it is probably true that the tree exists separate from your idea of it. | |
| ▲ | colordrops 4 hours ago | parent [-] | | No, I don't feel crazy. Just honest. I have no idea if the tree is still there when I cease to exist. I just go with that assumption out of convenience. This degrading of subjective experience as a minor detail rather than a fundamental aspect of reality is one of the core sources of confusion in western thought IMHO. | | |
| ▲ | 0xBA5ED 2 hours ago | parent [-] | | I'd argue we must go with these assumptions out of necessity rather than convenience. I don't have any broad strokes to offer on western thought, however. |
|
|
|
|
| ▲ | Delk 6 hours ago | parent | prev [-] |
| I think #2 is actually circular, or perhaps rather contradictory. In order to be able to have an illusion one would have to be conscious in the first place. Or how would you have an illusion of something if you're not aware enough to experience that illusion? So I don't think the concept of "illusion of consciousness" makes much sense. (It does make sense for others to have an illusion that an AI or some other entity is conscious, but not for the entity itself.) > Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth. Perhaps, but I think a physical presence is still required for consciousness, at least for any kind of consciousness that resembles ours. It's perhaps easier to talk about qualia rather than consciousness, but I think qualia are a prerequisite for consciousness anyway. Basically all of our qualia are somehow related to our needs in the physical world. We feel physical pain because it signals that our body is in danger of being damaged. We feel emotional pain from social rejection because for most of our history humans have needed other people for physical survival. (Or in some cases perhaps because our genes make us want to procreate and we failed at that.) Either way, our needs in the physical world are not being met. Evolution has produced genetic code that produces a brain that somehow makes us feel that subjectively, even if nobody knows how. Those subjective experiences of course get processed by neurons, assuming you accept materialism. (Neurons are AFAIK significantly more complex than the "neurons" in ANNs, so equating biological neuronal activity with ANNs is wrong. But I suppose in principle any physical process may be represented or at least approximated by some symbolic representation, so in theory that probably doesn't matter.) We can also express those subjective qualia in terms of language. 
However, I don't think it's possible to have our qualia (or consciousness) based on language or symbolic manipulation alone if it doesn't have some kind of a connection to our physical needs. If you could directly simulate an entire human brain and feed it artificial sensory input, I suppose it would actually be conscious without having a physical body. In principle an AI could also evolve consciousness based on survival needs even if it were not biological. But for example LLMs have been trained only on the symbolic level. Their "neural" structure is not simulating a brain and they don't have a connection to physical needs. I think that makes them incapable of consciousness even if the output they produce successfully mimics human language -- that is, symbolic representations of our qualia and conscious thought. I'm not sure if that's the point the author is making. But I think the distinction between the purely symbolic "map" and the "actual thing" sort of makes sense. |