ants_everywhere | 6 days ago
> Pretty much the same conclusion here. Consciousness is what we feel when sheaf 1-cohomology among our different senses vanishes.

There's more to it than this. For one thing, there's a threshold of awareness. Your mind is constantly doing things and having thoughts that never rise to the threshold of awareness. You can observe more of this if you meditate and less of it if you constantly distract yourself. But consciousness, IMO, should have the idea of a threshold baked in.

For another, the brain will unify things that don't make sense. I assume you mean something like: consciousness is what happens when there are no obstructions to stitching sensory data together. But the brain does a lot of work interpreting incoherent data as best it can. It doesn't have to limit itself to coherent data.
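Spelling out what I take the quoted claim to mean, in the Čech sense (standard definitions, nothing specific to consciousness): cover the experiential space X by chart domains U_i, one per sense, each carrying a local section s_i ∈ F(U_i). The local sections assemble into a single global percept exactly when

    s_i|_{U_i ∩ U_j} = s_j|_{U_i ∩ U_j}   for all i, j        (gluing condition)

The pairwise discrepancies g_{ij} on overlaps form a Čech 1-cochain; it is a cocycle when

    g_{ij} + g_{jk} = g_{ik}   on U_i ∩ U_j ∩ U_k             (cocycle condition)

and it is a coboundary when g_{ij} = h_j - h_i for some per-chart corrections h_i. "H^1 vanishes" says every cocycle is a coboundary, i.e. any pattern of overlap mismatches (at least, any pattern consistent on triple overlaps) can be absorbed by recalibrating each sense separately.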
kelseyfrog | 6 days ago
I'll have to reflect more on the first part, but as far as

> It doesn't have to limit itself to coherent data.

goes, there are specific failure cases for non-integrability:

1. Dissociation/derealization = partial failures of gluing.
2. Nausea = inconsistent overlaps (i.e., large cocycles) interpreted as bodily threat.
3. Anesthesia = disabling of the sheaf functor: no global section is possible.

At least for me it provides a consistent working model for hallucinations, synesthesia, phantom limb phenomena, and split-brain scenarios. If anything, the ways in which sensory integration fails are more interesting than the ways in which it succeeds.
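To make the "large cocycles" point concrete, here's a toy sketch in Python. Everything in it (the chart names, the numbers, the tolerance) is invented for illustration, and the "cocycle" here is just a pairwise overlap mismatch, not real Čech machinery: each sense is a chart assigning local estimates to the regions it covers, and when charts disagree too much on shared regions, no consistent global percept can be stitched together.

    from itertools import combinations

    # Toy model: each "sense" is a chart assigning a local estimate
    # (a single number, e.g. perceived head tilt in degrees) to the
    # regions of experience it covers. All names/values are invented.
    charts = {
        "vision":     {"head": 2.0, "horizon": 1.8},
        "vestibular": {"head": 2.1, "gut": 0.0},
        "proprio":    {"head": 9.5, "gut": 0.1},  # disagrees badly on "head"
    }

    GLUE_TOLERANCE = 1.0  # arbitrary cutoff for "close enough to stitch"

    def overlap_discrepancies(charts):
        """Measure how much each pair of charts disagrees on the regions
        they both cover -- a crude stand-in for a Cech 1-cochain."""
        disc = {}
        for (a, fa), (b, fb) in combinations(charts.items(), 2):
            for region in fa.keys() & fb.keys():
                disc[(a, b, region)] = fb[region] - fa[region]
        return disc

    def classify(charts):
        """Two of the modes from the list above: if all overlaps agree
        within tolerance, a global percept glues; if some overlap mismatch
        is large, integration fails (the 'large cocycle' conflict case)."""
        disc = overlap_discrepancies(charts)
        worst = max((abs(v) for v in disc.values()), default=0.0)
        if worst <= GLUE_TOLERANCE:
            return "glues: consistent global percept"
        return f"fails to glue: worst overlap mismatch = {worst:.1f}"

    print(classify(charts))  # -> fails to glue: worst overlap mismatch = 7.5

In this toy version, dropping "proprio" or shrinking its mismatch makes the percept glue again; as above, the interesting part is cataloguing the ways it fails.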