That is a good way to think about it. At some point this becomes partly a matter of philosophical belief, but I am somewhat skeptical of the idea that everything can be reduced in that way. In order to build theories, we often reduce too much. When we build mental models of complex systems, especially when we try to treat them as closed systems, we always have to accept some degree of information loss. So I do partially agree with your point: a mechanistic explanation alone does not prove the absence of consciousness, and human intelligence can also be described in mechanistic terms. But I worry that this framing simplifies too much. It may reduce a complex phenomenon into a model that is useful in some ways but incomplete in others.
▲ | dijksterhuis 3 hours ago

this whole consciousness thing is fairly easy to put to bed if you run with the ideas from things like buddhism that everything is consciousness. then none of us have to bother with silly, distracting arguments about something that ultimately does not matter. is it helpful or harmful? am i being helpful or harmful when i interact with it? am i interacting with it in a helpful or harmful way? i’d rather people focussed on that than on framing the debate, yet again, around whether something has some ineffable property that we struggle to quantify even for ourselves.

quick edit — treat everything like it’s conscious, and don’t be a dick to it or while using it. problem solved.
▲ | jdw64 3 hours ago

hmm... That also seems like a reasonable framing.
But the original article argues, first of all, that we should de-anthropomorphize AI. My point is only that, from the perspective of human cognition, anthropomorphizing can sometimes be useful. In practice, though, I think I am mostly on the same side as you.
To be honest, I have not thought about this topic very deeply. If we debated it further, I would probably only echo other people’s opinions. As you know, when something complex is compressed into a mental model, some information is always lost. In this case, the compression may be too lossy for the model to be very useful.
I have not spent enough time thinking about this issue on my own. I also have not really tried on different positions, compared them, or tested them against each other. So my current thoughts on this topic are probably not very high-resolution.
In that sense, I may agree with you, but it would not really be an answer I recognize as my own. It would mostly be an echo of other people’s opinions.
▲ | altruios an hour ago

Anthropomorphizing is giving it 'human' qualities. Intelligence and consciousness are not solely human qualities, and treating things with kindness and respect does not require anthropomorphizing. LLMs DO NOT THINK LIKE HUMANS (if they 'think' at all), and treating them like they think exactly like us is probably going to lead to bad places. I treat them like an alien mind: probably thinking, but in an alien way that is hard to recognize as 'thinking' (as these discussions prove), and, if experiencing at all, experiencing through a metaphorical optophone.
▲ | goatlover 3 hours ago

I don't think that really helps. If you believe rocks are conscious, does extracting mineral resources cause them pain? Do plants suffer when we pick their fruits and eat them? I don't see any behavioral or physical reason to think those things have conscious states. As for what consciousness is, it's pretty simple: your sensations of color, sound, and so on, in perception, dreams, imagination, and the like. The reason to dismiss LLMs as conscious is that those sensations depend on having bodies. You can prompt an AI to act like it's hungry, but there's no real meaning to it having a hungry experience when it has no digestive system.
▲ | Jtarii 2 hours ago

>As for what consciousness is, it's pretty simple.

2000+ years of philosophical thought would disagree. I don't believe biological stuff has a magic property that imbues it with some intangible "consciousness." It makes more sense to me that consciousness is just a fundamental property of all matter.
▲ | altruios an hour ago

> consciousness is just a fundamental property of all matter

... Does that really make more sense than as an emergent property of the arrangement of matter?
▲ | Jtarii 24 minutes ago

Consciousness is something you can perceive, so it must have some physical presence in the universe, which, in my opinion, must come through some fundamental property of matter. The ability to be aware of consciousness itself as a process that is happening elevates it above a mere emergent property, to me.
▲ | altruios 3 minutes ago

> The ability to be aware of consciousness itself as some process that is happening.

But a process is not a physical presence... A wave is made of things, but is not those things; waves emerge. Why not, then, every process?
▲ | dijksterhuis 2 hours ago

you’ve misunderstood. everything is consciousness. not everything has consciousness. very different.
▲ | rusk 3 hours ago

Historically we have used intelligence as a way to distinguish man from animal and human from machine. We rely upon it to determine who has our best interests at heart and who is trying to do us in. Obviously that all changes if we invent an intelligence (conscious or not) that shares the planet with us. Through this lens, the term consciousness becomes (through a few more leaps) the question "is it capable of love, and if so, does it love us?" If it doesn't, then it is a malevolent alien intelligence. And if it were capable of love, why would it love us? I make a point of being polite to LLMs where not completely absurd, overtly because I don't want my clipped imperative style to leak into day-to-day conversation, but also covertly because, well, you just never know …