▲ dahart 2 days ago
Still not seeing why you claimed color is definitely not something you can plug into a 3D model. We can, and do, use 3D color models, and some of them are designed to be closer to perceptual in nature, such as the LAB space @zeroq mentioned at the top of this sub-thread. No well-known perceptual color space I know of, and no color space in Photoshop, accounts for context/surround/background, so I don’t understand your claim about Photoshop immediately after you talked about the surround problem. But FWIW, everyone here knows that RGB is not a perceptual color space and doesn’t have a single specification or standard, and everyone here knows that color spaces don’t solve all perceptual problems.

I find it confusing to claim that cone response isn’t color yet; that’s going to get you in trouble in serious color discussions. Maybe better to be careful and qualify that you’re talking about perception than to say something that is highly contestable?

The claim that a color model must model perception is also inaccurate. Whether to incorporate human perception is a design choice, a goal in some models. Having perceptual goals is absolutely not a requirement for designing a practical color model; it depends entirely on the design goals. It’s perfectly valid to have physical color models with no perceptual elements.
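For concreteness, here’s roughly what a 3D “perceptual-ish” model looks like in code: a plain sRGB triple pushed through the standard matrices into CIELAB, plus the old CIE76 delta-E. This is a minimal sketch assuming sRGB primaries and a D65 white point; note that nothing in it looks at surround or background, each triple is converted on its own.

    import math

    def srgb_to_lab(r, g, b):
        """sRGB in [0, 1] -> CIELAB, assuming a D65 white point. Surround is ignored."""
        # 1. Undo the sRGB transfer curve to get linear light.
        def lin(c):
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = lin(r), lin(g), lin(b)

        # 2. Linear sRGB -> CIE XYZ (standard sRGB/D65 matrix).
        x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        z = 0.0193 * r + 0.1192 * g + 0.9505 * b

        # 3. XYZ -> Lab, relative to the D65 white point.
        xn, yn, zn = 0.95047, 1.0, 1.08883
        def f(t):
            return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    def delta_e76(lab1, lab2):
        """CIE76 color difference: plain Euclidean distance in Lab."""
        return math.dist(lab1, lab2)

    # Two nearby grays, scored entirely context-free.
    print(delta_e76(srgb_to_lab(0.5, 0.5, 0.5), srgb_to_lab(0.55, 0.5, 0.5)))

That’s the sense in which LAB is “perceptual”: roughly equal Euclidean steps are meant to be roughly equal perceived differences, not that it models viewing conditions.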
▲ subb a day ago | parent
The problem is that we mix up the physical and the perceptual, including in our language. If you look at the physical stuff, there's nothing in this specific range of EM radiation that is different from UV or IR light (or beyond). The physical stuff is not unique; our reading of it is. Therefore, color is not a physical thing. So when I say "color" I only mean the construction that we make out of the physical thing. We project these constructions back outside of us (e.g. "the apple is red"), but we must not fool ourselves that the projection is the thing, especially when we try to be more precise about what is happening.

This is why I'm saying a 3D color model is very far from modelling color (the brain thing) at all. But it's not purely physical either, otherwise it would just be a spectral band or something. So it's pseudo-perceptual: it's the physical stuff, tailored to the very first bits of anatomy we have for reading that physical stuff. It's stimulus encoding.

If you build a color model, it's therefore always perceptual, and needs to be evaluated against what you are trying to model - perception. You create a model to predict things. RGB and all the other models based on three values in a vacuum will always fail at predicting color (brain!) when the stimulus's surround is more complex.
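To make the "stimulus encoding" point concrete, here's a toy sketch (made-up Gaussian cone sensitivities, not the real cone fundamentals): a whole spectral power distribution gets collapsed into just three numbers, and everything downstream, including any 3D color model, only ever sees those three numbers.

    import math

    WAVELENGTHS = range(400, 701, 5)  # nm, visible band only

    def toy_cone(peak_nm, width_nm=40.0):
        """Made-up Gaussian sensitivity curve; a stand-in for a real cone response."""
        return {w: math.exp(-((w - peak_nm) / width_nm) ** 2) for w in WAVELENGTHS}

    L, M, S = toy_cone(570), toy_cone(545), toy_cone(445)

    def encode(spectrum):
        """Collapse a full spectral power distribution into three cone responses."""
        return tuple(sum(spectrum.get(w, 0.0) * cone[w] for w in WAVELENGTHS)
                     for cone in (L, M, S))

    # A 61-sample spectrum goes in; only three numbers come out. Any other
    # spectrum that happens to produce the same triple (a metamer) is
    # indistinguishable from this one at this stage.
    flat = {w: 1.0 for w in WAVELENGTHS}
    print(encode(flat))

Everything past that triple - surround, adaptation, the "apple is red" judgment - is the part the three numbers alone can't predict.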