| ▲ | Chance-Device 14 hours ago |
| > Note that none of this tells us whether language models actually feel anything or have subjective experiences. You’ll never find that in the human brain either. There’s the machinery of neural correlates to experience, we never see the experience itself. That’s likely because the distinction is vacuous: they’re the same thing. |
|
| ▲ | Fraterkes 13 hours ago | parent | next [-] |
| Do you think these LLMs have subjective experiences? (by "subjective experience" I mean the thing that makes stepping on an ant worse than kicking a pebble) And if so, do you still use them? Additionally: when do you think that subjectivity started? Was there a "there" there with GPT-2? |
| |
| ▲ | Chance-Device 13 hours ago | parent [-] | | Yes, I think they probably are conscious, though what their qualia are like might be incomprehensible to me. I don’t think that being conscious means being identical to human experience. Philosophically I don’t think there is a point where consciousness arises. I think there is a point where a system starts to be structured in such a way that it can do language and reasoning, but I don’t think these are any different than any other mechanisms, like opening and closing a door. Differences of scale, not kind. Experience and what it is to be are just the same thing. And yes, I use them. I try not to mistreat them in a human-relatable sense, in case that means anything. | | |
| ▲ | gavinray an hour ago | parent | next [-] | | I'm in the same boat with you. It's entirely too much to put in a Hacker News comment, but if I had to phrase my beliefs as precisely as possible, it would be something like: > "Phenomenal consciousness arises when a self-organizing system with survival-contingent valence runs recurrent predictive models over its own sensory and interoceptive states, and those models are grounded in a first-person causal self-tag that distinguishes self-generated state changes from externally caused ones."
I think that our physical senses and mental processes are tools for reacting to valence stimuli. Before an organism can represent "red"/"loud" it must process states as approach/avoid, good/bad, viable/nonviable. There's a formalization of this known as the "Psychophysical Principle of Causality." Valence isn't attached to representations -- representations are constructed from valence. I.e., you don't first see red and then decide it's threatening. The threat-relevance is the prior, and "red" is a learned compression of a particular pattern of valence signals across sensory channels. Humans are constantly generating predictions about sensory input, comparing those predictions to actual input, and updating internal models based on prediction errors. Our moment-to-moment conscious experience is our brain's best guess about what's causing its sensory input, while constrained by that input. This might sound ridiculous, but consider what happens when consuming psychedelics: as you increase the dose, predictive processing falters and bottom-up errors increase, so the raw sensory input passes through increasingly weaker model-fitting filters. At the extreme, the "self" vanishes and raw valence is all that is left. | |
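[Editor's note: a minimal toy sketch of the predict-compare-update loop described in the comment above, purely illustrative. The function name, parameters, and constants are assumptions made for this example, not anything from the thread; lowering `precision` very loosely stands in for the weakened model-fitting the comment attributes to psychedelics.]

```python
import random

def simulate(hidden_cause=5.0, precision=0.5, noise=1.0, steps=20):
    """Toy predictive-coding loop: predict the input, measure the error, update."""
    estimate = 0.0                                   # the model's current best guess
    for t in range(steps):
        sensory_input = hidden_cause + random.gauss(0, noise)
        prediction_error = sensory_input - estimate  # bottom-up error signal
        estimate += precision * prediction_error     # precision-weighted update
        print(f"step {t:2d}  input={sensory_input:6.2f}  estimate={estimate:6.2f}")
    return estimate

# High precision tracks the input closely; low precision lets the prior
# estimate dominate, so prediction errors are largely ignored.
simulate(precision=0.5)
```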
| ▲ | Fraterkes 12 hours ago | parent | prev [-] | | Do you think there are "scales" of consciousness? As in, is there some quality that makes killing a frog worse than killing an ant, and killing a human worse than killing a frog? If so, do the LLMs exist across this scale, or are GPT-3 and GPT-2 conscious at the same "scale" as GPT-4? I ask because if your view of consciousness is mechanistic, this is fairly cut and dried: GPT-2 has about 4 orders of magnitude fewer parameters (less complexity) than GPT-4.
But both GPT-2 and GPT-4 are very fluent at a language level (both more so than a human 6-year-old, for example), so in your view they might both be roughly equally conscious, just expressed differently? | |
| ▲ | Chance-Device 12 hours ago | parent [-] | | This is really a different question, what makes an entity a “moral patient”, something worthy of moral consideration. This is separate from the question of whether or not an entity experiences anything at all. There are different ways of answering this, but for me it comes down to nociception, which is the ability to feel pain. We should try to build systems that cannot feel pain, where I also mean other “negative valence” states which we may not understand. We currently don’t understand what pain is in humans, let alone AIs, so we may have built systems that are capable of suffering without knowing it. As an aside, most people seem to think that intelligence is what makes entities eligible for moral consideration, probably because of how we routinely treat animals, and this is a convenient self-serving justification. I eat meat by the way, in case you’re wondering. But I do think the way we treat animals is immoral, and there is the possibility that it may be thought of by future generations as being some sort of high crime. | | |
| ▲ | Fraterkes 11 hours ago | parent [-] | | Okay, but even leaving aside the pain stuff, people generally find subjectivity / consciousness to have inherent value, and by extension are sad if a person dies even if they didn't (subjectively) suffer. I would not personally consider the death of a sentient being with decades of experiences a neutral event, even if the being had been programmed to not have a capacity for suffering. I think the idea of there being a difference between an ant dying (or "disappearing" if that's less loaded) vs a duck dying makes sense to most people (and is broadly shared) even if they don't have a completely fleshed out system of when something gets moral consideration. | |
| ▲ | Chance-Device 11 hours ago | parent [-] | | Sure, because you're a human. We have social attachment to other humans and we mourn their passing; that's built into the fabric of what we are. But that has nothing to do with whoever has passed away; it's about us and how we feel about it. It's also about how we think about death. It's weird in that being dead probably isn't like anything at all, but we fear it, and I guess we project that fear onto the death of other entities. I guess my value system says that being dead is less bad than being alive and suffering badly. | |
| ▲ | gavinray 42 minutes ago | parent | next [-] | | Depending on your definition of "death", I've been there (no heartbeat, stopped breathing for several minutes). In the time between my last memory and being revived in the ambulance, there was no experience/qualia. Like a dreamless sleep: you close your eyes and then you wake up; it's morning, yet it feels like no time has passed. | |
| ▲ | brap 2 hours ago | parent | prev [-] | | What about being alive and suffering just a little bit? | | |
|
|
|
|
|
|
|
| ▲ | felipeerias 10 hours ago | parent | prev | next [-] |
| LLMs are disembodied and exist outside of time. Bundle of tokens comes in, bundle of tokens comes out. If there is any trace of consciousness or subjectivity in there, it exists only while matrices are being multiplied. |
| |
| ▲ | staticassertion 6 minutes ago | parent | next [-] | | What do you mean, exist outside of time? They definitely don't exist outside of any causal chain - tokens follow other tokens in order. Gaps in which no processing occurs seem sort of irrelevant to me. | |
| ▲ | Chance-Device 10 hours ago | parent | prev | next [-] | | That’s true by definition. They’re only on when they’re on. Are you making a broader point that I’m missing? | |
| ▲ | thrance 8 hours ago | parent | prev [-] | | Something similar could be said of the brain? Bundles of inputs come in, bundles of outputs come out. It only exists while information is being processed. A brain cut from its body and frozen exists in a similar state to an LLM in ROM. |
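[Editor's note: to ground what "tokens follow other tokens" means computationally, here is a minimal sketch of a stateless autoregressive loop. The `next_token` function is a hypothetical stand-in for a single forward pass of a model, not any particular library's API; the point is that all state lives in the growing token list, and between calls nothing is running at all.]

```python
from typing import Callable, List

def generate(prompt_tokens: List[int],
             next_token: Callable[[List[int]], int],
             max_new_tokens: int = 32,
             eos_id: int = 0) -> List[int]:
    """Autoregressive decoding in its simplest form: each step is a pure
    function of the tokens so far; nothing persists between steps except
    the growing list itself."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        new = next_token(tokens)   # one forward pass happens here, then stops
        tokens.append(new)
        if new == eos_id:
            break
    return tokens

# Trivial stand-in "model" that just counts modulo 11, to show the loop shape.
print(generate([101, 7], next_token=lambda ts: (ts[-1] + 1) % 11))
```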
|
|
| ▲ | suddenlybananas 13 hours ago | parent | prev | next [-] |
| I know I have experience. I don't know for sure whether you do, but it seems a very reasonable extension to other people. LLMs, though, are a radical jump that needs a greater degree of justification. |
| |
| ▲ | Chance-Device 12 hours ago | parent [-] | | And what kind of evidence would convince you? What experiment would ever bridge this gap? You're relying entirely on similarity between yourself and other humans. That doesn't extend very well to anything else, even animals, though it extends more readily to them than to machines. By framing it this way, have you baked in the conclusion that nothing else can be conscious on an a priori basis? | |
| ▲ | suddenlybananas 11 hours ago | parent [-] | | I'm not sure what evidence would convince me, but I don't think the way LLMs act is convincing enough. The kinds of errors they make, and the fact that they operate in very clear discrete chunks, make it hard for me to attribute subjective experience to them. | |
| ▲ | 9wzYQbTYsAIc 8 hours ago | parent [-] | | Consciousness: do you believe plants are conscious? Ants? Jellyfish? Rabbits? Wolves? Monkeys? Humans? Even fungi demonstrate “different communication behaviors when under resource constraint”, for example. What we anthropomorphize is one thing, but demonstrable patterns of behavior are another. | | |
| ▲ | suddenlybananas 4 hours ago | parent [-] | | I just don't know. I'm certain other humans are; everything beyond that, I'm less certain about. Monkeys, wolves, and rabbits, probably. | |
| ▲ | brap 2 hours ago | parent [-] | | I have decided to draw an arbitrary line at mammals, just because you gotta put a line somewhere and move on with your life. Mammals shouldn’t be mistreated, for almost any reason. Sometimes the whole animal kingdom, sometimes all living organisms, depending on context. Like, I would rather not harm a mosquito, but if it’s in my house I will feel no remorse for killing it. LLMs, or any other artificial “life”, I simply do not and will not care about, even though I accept that to some extent my entire consciousness can be simulated neuron by neuron in a large enough computer. Fuck that guy, tbh. |
|
|
|
|
|
|
| ▲ | bigyabai 13 hours ago | parent | prev | next [-] |
| > That’s likely because the distinction is vacuous: they’re the same thing. The Chinese Room would like a word. |
| |
| ▲ | the8472 an hour ago | parent | next [-] | | https://www.scottaaronson.com/papers/philos.pdf | |
| ▲ | Chance-Device 13 hours ago | parent | prev [-] | | The Chinese room is nonsense though. How did it get every conceivable reply to every conceivable question? Presumably because people thought of and answered everything conceivable. Meaning that you're actually talking to a composite system of a Chinese room plus multiple people. You would not argue that the human part of that system isn't conscious. But this distraction aside, my point is this: there is only mechanism. If someone's condition for accepting consciousness in some other entity is to experience those experiences for themselves, then that's a nonsensical demand. You might just as well assume everyone and everything else is a philosophical zombie. | |
| ▲ | bigyabai 13 hours ago | parent [-] | | > You would not argue that the human part of that system isn't conscious. Sure I would. The human part is not being inferenced, the data is. LLM output in this circumstance is no more conscious than a book that you read by flipping to random pages. > You might just as well assume everyone and everything else is a philosophical zombie. I don't assume anything about anyone's or anything's intelligence. I have a healthy distrust of all claims. | |
| ▲ | Chance-Device 13 hours ago | parent [-] | | The CR is equivalent to a human being asked a question, thinking about it and answering. The setup is the same thing, it’s just framed in a way that obfuscates that. And sure, you can assume that nobody and nothing else is conscious (I think we’re talking about this rather than intelligence) and I won’t try to stop you, I just don’t think it’s a very useful stance. It kind of means that assuming consciousness or not means nothing, since it changes nothing, which is more or less what I’m saying. |
|
|
|
|
| ▲ | thrance 12 hours ago | parent | prev | next [-] |
| See also: Functionalism [1]. [1] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of... |
| |
|