bigyabai | 13 hours ago
> You would not argue that the human part of that system isn't conscious.

Sure I would. The human part is not being inferenced; the data is. LLM output in this circumstance is no more conscious than a book that you read by flipping to random pages.

> You might just as well assume everyone and everything else is a philosophical zombie.

I don't assume anything about anyone's or anything's intelligence. I have a healthy distrust of all claims.
Chance-Device | 13 hours ago
The Chinese Room is equivalent to a human being asked a question, thinking about it, and answering. The setup is the same thing; it's just framed in a way that obfuscates that.

And sure, you can assume that nobody and nothing else is conscious (I think we're talking about consciousness here rather than intelligence) and I won't try to stop you; I just don't think it's a very useful stance. It means that assuming consciousness or not amounts to nothing, since it changes nothing, which is more or less my point.