| ▲ | api 7 hours ago |
| If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism. As you say, it's static, fixed, deterministic, and so on, and if you know how it works, it's more like a lossy compression model of knowledge than a mind. Ultimately it's a lot of math. So if it's conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It's a fixed model. It's non-reflective. Etc. |
|
| ▲ | root_axis 6 hours ago | parent | next [-] |
| I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be. What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category. |
|
| ▲ | digitaltrees 7 hours ago | parent | prev [-] |
| Why would current AI be an argument for panpsychism? I don't understand the connection. AI is stochastic, not static and deterministic. As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, while self-awareness and consciousness are emergent properties of a language that has a concept of the self and of others. Rocks, like most of nature, lack both sensory and language systems. |
| |
| ▲ | applfanboysbgon 7 hours ago | parent | next [-] |
| > AI is stochastic, not static and deterministic.
| LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the input is different every time, because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software. |
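| (The determinism point can be sketched with a toy stand-in for a frozen model's forward pass. The `toy_logits` and `generate` names below are hypothetical, not any real inference stack: greedy decoding is a pure function of the input, and even sampling becomes reproducible once the seed is pinned.)

```python
import numpy as np

def toy_logits(tokens, vocab=8):
    # Stand-in for a frozen LLM forward pass: any fixed, deterministic
    # function of the input tokens (real model weights are constants
    # at inference time too).
    s = sum(tokens)
    return np.cos(np.arange(vocab) * (s + 1) * 0.7)

def generate(prompt, steps, temperature=0.0, seed=None):
    rng = np.random.default_rng(seed)
    out = list(prompt)
    for _ in range(steps):
        logits = toy_logits(out)
        if temperature == 0.0:
            # Greedy decoding: argmax of a fixed function, so the same
            # prompt always yields the same continuation.
            nxt = int(np.argmax(logits))
        else:
            # Sampling only looks stochastic; fix the seed and the
            # "randomness" is reproduced exactly.
            p = np.exp(logits / temperature)
            p /= p.sum()
            nxt = int(rng.choice(len(p), p=p))
        out.append(nxt)
    return out

# Same input, same output, every time:
assert generate([1, 2, 3], 5) == generate([1, 2, 3], 5)
# Even the sampled path repeats once the seed is held constant:
assert generate([1, 2, 3], 5, temperature=1.0, seed=42) == \
       generate([1, 2, 3], 5, temperature=1.0, seed=42)
```

| The randomness providers expose lives entirely in the seed handed to the sampler, not in the model itself. |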
| ▲ | digitaltrees an hour ago | parent [-] |
| The same argument is made about the human neural network. |
| ▲ | applfanboysbgon 8 minutes ago | parent [-] |
| 1. That is not the claim you originally made. 2. Not provably so. 3. Even if it were, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. |
| I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output. |
|
| |
| ▲ | colechristensen 7 hours ago | parent | prev [-] |
| I think it's the opposite argument: IF current AI is conscious, so are trees, rocks, turbulent flows, etc. The argument being that LLMs are so simple that if you want to ascribe consciousness to them, you have to do the same to a LOT of other stuff. |
| ▲ | digitaltrees an hour ago | parent [-] |
| But I listed a specific difference: sensation and response. Trees have that. Rocks do not. |
|
|