kelseyfrog 6 days ago
I'll have to reflect more on the first part, but as for

> It doesn't have to limit itself to coherent data.

There are specific failure cases for non-integrability:

1. Dissociation/derealization = partial failures of gluing.
2. Nausea = inconsistent overlaps (i.e., large cocycles) interpreted as bodily threat.
3. Anesthesia = disabling of the sheaf functor: no global section is possible.

At least for me it provides a consistent working model for hallucinogenic experiences, synesthesia, phantom-limb phenomena, and split-brain scenarios. If anything, the ways in which sensory integration fails are more interesting than the ways it succeeds.
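The gluing picture above can be caricatured in a few lines of code. This is a toy sketch under my own assumptions, not anything from the comment: each "sensor" reports values over a patch of a shared domain, and a global percept exists only when all reports agree on the overlaps; the size of the worst disagreement plays the role of the "cocycle".

```python
def glue(sections, tolerance=0.0):
    """Toy sheaf-style gluing of sensor reports.

    sections: dict mapping sensor name -> dict {point: value}.
    Returns (global_section, max_discrepancy). Gluing fails (None) when
    any overlap disagrees by more than `tolerance` -- a "large cocycle".
    """
    merged = {}
    max_disc = 0.0
    for name, local in sections.items():
        for point, value in local.items():
            if point in merged:
                disc = abs(merged[point] - value)
                max_disc = max(max_disc, disc)
                if disc > tolerance:
                    # Inconsistent overlap: no global section exists.
                    return None, max_disc
            else:
                merged[point] = value
    return merged, max_disc

# Consistent overlaps glue into one global section:
vision = {"a": 1.0, "b": 2.0}
touch = {"b": 2.0, "c": 3.0}
assert glue({"vision": vision, "touch": touch})[0] == {"a": 1.0, "b": 2.0, "c": 3.0}

# Disagreement on an overlap (a "large cocycle") blocks gluing -- the toy
# analogue of the nausea/dissociation cases in the list above:
vestibular = {"b": 5.0, "c": 3.0}
assert glue({"vision": vision, "vestibular": vestibular})[0] is None
```

In this caricature, "disabling the sheaf functor" (the anesthesia case) would amount to never calling `glue` at all: local data may still exist, but no global section is ever attempted.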
ants_everywhere 6 days ago | parent
Yeah, to be clear, I like this mental model a lot, and I give it extra points for invoking sheaf theory :). I was just saying it doesn't seem complete to me from a psychological perspective. The way I look at it, the sensors provide data as activations, and awareness is some output with a thresholding or activation function. In my mental model, sense-making and consciousness happen after the fact, and they try to happen even on nonsense data, as opposed to (as I was reading you to be leaning toward) being a consequence of the sensory data standing in a sufficiently nice relationship to each other.
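For contrast, the alternative picture in this reply can be caricatured the same way. This is my own illustrative sketch, with hypothetical names: awareness is just a threshold over raw activations, and a downstream sense-making step runs on whatever gets through, whether or not the signals cohere.

```python
def awareness(activations, threshold=0.5):
    """Pass through any activation strong enough to enter awareness.
    Note there is no coherence check at this stage."""
    return {name: v for name, v in activations.items() if v >= threshold}

def make_sense(aware):
    """Downstream interpretation runs even on inconsistent data: it
    produces a (possibly nonsensical) story rather than failing."""
    if not aware:
        return "nothing to interpret"
    return "story covering: " + ", ".join(sorted(aware))

# Contradictory signals both cross the threshold; interpretation still happens:
percept = awareness({"vision:moving": 0.9, "vestibular:still": 0.8, "noise": 0.1})
assert make_sense(percept) == "story covering: vestibular:still, vision:moving"
```

The contrast with the gluing picture is that here a global "story" is always produced when anything crosses the threshold; incoherence changes the content of the story, not whether one exists.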