▲ | Fraterkes 3 days ago
Kinda off-topic, but for a while I’ve had an idea for a photography app where you take a picture, then select a color in it and adjust it until it matches the color you see in reality. Do that for a few colors and you could eventually map all the colors in the picture much closer to the perceived colors, without having to do coarser post-processing. Even if you got something very posterized like in the article, I think it could at least be a great reference for a more traditional processing step afterwards. I always wonder why that doesn’t seem to exist yet.
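A minimal sketch of how those few corrections could be spread across the whole image: treat each picked color and its adjusted version as an anchor, and blend the anchor offsets over every pixel with inverse-distance (Shepard) weighting in RGB space. The function name and the weighting scheme are my own illustration, not anything from an existing app:

```python
import numpy as np

def remap_colors(image, picked, corrected, eps=1e-6):
    """Shift every pixel toward the user's corrections, weighting each
    anchor by inverse distance in RGB space (Shepard interpolation)."""
    picked = np.asarray(picked, dtype=float)        # (k, 3) colors the user selected
    corrected = np.asarray(corrected, dtype=float)  # (k, 3) what they adjusted them to
    offsets = corrected - picked                    # (k, 3) per-anchor shifts

    pixels = image.reshape(-1, 3).astype(float)     # (n, 3)
    # Distance from every pixel to every picked color.
    d = np.linalg.norm(pixels[:, None, :] - picked[None, :, :], axis=2)  # (n, k)
    w = 1.0 / (d + eps)                             # closer anchors dominate
    w /= w.sum(axis=1, keepdims=True)
    shifted = pixels + w @ offsets                  # blend the anchor offsets
    return np.clip(shifted, 0, 255).reshape(image.shape)
```

A pixel that exactly matches a picked color lands (almost) exactly on its correction, while in-between colors get a blend of the nearby shifts; a real app would likely want a smoother interpolant and a perceptual color space rather than raw RGB.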
▲ | JKCalhoun 3 days ago | parent | next
I wonder if this isn't like including a Macbeth chart [1] in the photo and then trying to color-match your image so that the swatches on the Macbeth chart look the same digitally as they do in real life. One bottleneck, of course, is that the display you're viewing the image on likely doesn't have a gamut rich enough to even show all the colors of the Macbeth chart. No amount of fiddling with knobs will get you a green as rich as reality if that intense green lies outside the display's capabilities. But of course you can try to get close.

[1] https://en.wikipedia.org/wiki/Color_chart

(I seem to recall, BTW, that these GretagMacbeth color charts are so consistent because each color is represented chemically. I mean, I suppose all dyes are chemical, but I understood there was little to no mixing of pigments to get the Macbeth colors. I could be wrong about that, though. My first thought when I heard it was of sulfur: for example, how pure sulfur, in one of its states, must be the same color every time. Make a sulfur swatch and you should be able to consistently reproduce it.)
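The chart-matching idea above is usually done by solving for a 3x3 color correction matrix: photograph the chart, read off the swatch colors as shot, and find the linear map that best sends them to the chart's published reference values. A minimal least-squares sketch (the function name and sample values are illustrative, not real ColorChecker data):

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """Least-squares 3x3 matrix mapping the camera's swatch colors onto
    the chart's known reference values; apply to an image with pixels @ M."""
    measured = np.asarray(measured, dtype=float)    # (n_swatches, 3) as shot
    reference = np.asarray(reference, dtype=float)  # (n_swatches, 3) published values
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M
```

With 24 swatches the system is heavily overdetermined, which is what makes the fit robust to noise in any single patch; real pipelines also linearize (de-gamma) the camera values first.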
▲ | latexr 3 days ago | parent | prev | next
Sounds like a lot of work for something that wouldn’t produce that good a result. If you’ve ever tried to pick a colour from a picture with the eyedropper tool, you quickly realise that what you see as one colour is in fact a disparate set of pixels, and it can be quite hard to grab the exact one you want. So right there you hit the initial hurdle of finding and mapping the colour to change. Finding the edges would also be a problem. Not to mention every screen is different, so whatever changes you made, even if they looked right to you in the moment, would be useless once you sent the image to your computer for further processing. Oh, and our eyes can perceive it differently too. So now you’re doing a ton of work to badly change the colours of an image so they look maybe a bit closer to reality, for a single person on a single device.
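The "one colour is many pixels" problem is why real eyedroppers usually average a small neighborhood instead of reading a single pixel. A tiny sketch of that workaround (hypothetical helper, not any particular app's implementation):

```python
import numpy as np

def sample_color(image, x, y, radius=3):
    """Average a (2*radius+1)-square neighborhood around (x, y), clipped to
    the image bounds, to smooth over pixel-to-pixel colour variation."""
    h, w = image.shape[:2]
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    return image[y0:y1, x0:x1].reshape(-1, image.shape[2]).mean(axis=0)
```

It doesn't solve the edge-finding or screen-gamut objections, but it does make the initial "which colour did I actually click" step far less noisy.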
▲ | CharlesW 3 days ago | parent | prev | next
So this would be a subjective alternative to matching against color cards? What would the benefit be over a precise/objective match?
▲ | bux93 3 days ago | parent | prev
This is essentially what you do as step 1 when color correcting in DaVinci Resolve, but only for white (or anything that's grayscale). Select a spot that's white/gray, click the white balance picker, and the white balance is set. It's not perfect, of course, but it gets a surprisingly good result for close to zero effort.
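Under the hood, a gray-point picker like that just scales each channel so the picked "should be neutral" color comes out with equal R, G, and B. A minimal sketch of the math, assuming simple per-channel gains (Resolve's actual processing is more involved):

```python
import numpy as np

def white_balance_from_gray(image, gray_rgb):
    """Scale each channel so the picked should-be-neutral color becomes
    gray, i.e. all three channels are pulled to the picked color's mean."""
    gray_rgb = np.asarray(gray_rgb, dtype=float)
    gain = gray_rgb.mean() / gray_rgb  # per-channel gains
    return np.clip(image * gain, 0, 255)
```

Any pixel equal to the picked color lands exactly on neutral gray, and every other color shifts by the same channel gains, which is why one click on a genuinely gray object corrects the whole frame.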