A_D_E_P_T 15 hours ago:
I'd argue that it's not that complicated: if something meets the five criteria below, we must accept that it is conscious.

(1) It maintains a persisting internal model of an environment, updated from ongoing input.

(2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment.

(3) It possesses a memory that binds past and present into a single temporally extended self-model.

(4) It uses these models, with self-derived agency, to generate and evaluate counterfactuals: predictions of alternative futures under alternative actions (i.e. a general predictive function).

(5) It has control channels through which those evaluations shape its future trajectories, in ways that are not trivially reducible to a fixed reflex table.

This would also imply that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition.

I've been working on a book about this for the past three years...
jsenn 14 hours ago:
If you remove the terms "self", "agency", and "trivially reducible", it seems to me that a classical robot/game AI planning algorithm, which no one thinks is conscious, matches these criteria. How do you define these terms without begging the question?
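For concreteness, here's a toy sketch of the sort of planner I mean -- the grid world, goal, and all names are invented for illustration, not taken from any real system. Each comment marks which of the five criteria that piece nominally satisfies:

    # A minimal, hypothetical grid-world planner. It meets the letter of
    # criteria (1)-(5) once "self" and "agency" are read operationally.

    import itertools

    GOAL = (3, 3)  # arbitrary target cell, chosen for the example
    ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

    class PlanningAgent:
        def __init__(self):
            self.world = {}    # (1) persisting model of the environment
            self.pos = (0, 0)  # (2) model of its own "body", situated in it
            self.history = []  # (3) memory binding past and present

        def sense(self, observation):
            # (1) update the environment model from ongoing input
            self.world.update(observation)

        def simulate(self, pos, action):
            # (4) counterfactual: predicted next state under an
            # alternative action; blocked moves leave position unchanged
            dx, dy = ACTIONS[action]
            nxt = (pos[0] + dx, pos[1] + dy)
            return pos if self.world.get(nxt) == "wall" else nxt

        def evaluate(self, plan):
            # (4) score a predicted future: negative distance to the goal
            pos = self.pos
            for a in plan:
                pos = self.simulate(pos, a)
            return -(abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1]))

        def act(self):
            # (5) the evaluations, not a fixed reflex table, pick the
            # trajectory: search all 3-step plans, commit to the best start
            best = max(itertools.product(ACTIONS, repeat=3),
                       key=self.evaluate)
            self.history.append((self.pos, best[0]))  # (3) extend memory
            self.pos = self.simulate(self.pos, best[0])
            return best[0]

    agent = PlanningAgent()
    agent.sense({(1, 0): "wall"})
    print(agent.act())  # "N": planned around the wall, yet surely not conscious

Forty lines of ordinary search, and every box is checked -- which is why the load-bearing work has to be done by how "self", "agency", and "trivially reducible" are defined.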
turtleyacht 3 hours ago:
Is there a working title, or some way to follow for updates?
dllthomas 14 hours ago:
> so it's no surprise that we're not Boltzmann Brains

I think I agree you've excluded them from the definition, but I don't see why that has an impact on likelihood.
squibonpig 11 hours ago:
I don't see any obvious reason why any of these criteria need to lead to qualia. It could be a p-zombie -- why not?