uncircle 2 days ago
I found this video to visualise what tone mapping is trying to achieve, and why "photorealism" is hard to achieve in computer graphics: https://www.youtube.com/watch?v=m9AT7H4GGrA It also indirectly taught me how to use the exposure feature in my iPhone camera (when you tap a point in the picture): you are choosing the "middle gray" point of the picture for the tone mapping process, using your eyes, which have a much greater dynamic range than a CCD sensor. TIL.
amarshall a day ago
> the exposure feature in my iPhone camera…choose the "middle gray" point of the picture for the tone mapping process

No, it uses that point to set the physical exposure via shutter speed and ISO (iPhones have a fixed aperture, so that cannot be changed). The video you linked says this explicitly. That is not tone mapping. Tone mapping may also happen afterwards, to compress the sensor's wider dynamic range when the output format has a more limited one.
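To make the distinction concrete, here is a minimal sketch of the "afterwards" step: a global Reinhard-style tone mapping operator that compresses HDR luminance into [0, 1). The `middle_gray` key value of 0.18 is the usual photographic convention; the input values are made-up luminances for illustration.

```python
import numpy as np

def reinhard_tonemap(hdr, middle_gray=0.18):
    """Map HDR luminance values to [0, 1) with the global Reinhard operator."""
    eps = 1e-6  # avoid log(0) on pure-black pixels
    # Log-average luminance of the scene (the "key" of the image)
    log_avg = np.exp(np.mean(np.log(hdr + eps)))
    # Scale so the scene's log-average lands on middle gray
    scaled = hdr * (middle_gray / log_avg)
    # Reinhard curve: near-linear in shadows, asymptotically compresses highlights
    return scaled / (1.0 + scaled)

# A synthetic scene spanning five orders of magnitude of luminance
scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])
print(reinhard_tonemap(scene))
```

Note this runs entirely on recorded pixel values; it changes nothing about shutter speed or ISO, which is why it is a separate step from setting exposure.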
akomtu a day ago
I've heard a good point that our eyes actually have a modest ~1:100 instantaneous range of brightness. Eyes can adapt rapidly, but the real game changer is our ability to build an image in our "video memory", which has an effectively unlimited brightness range. The eyes give us something like a 2D uint8 framebuffer, while the mind creates and updates a float32 3D buffer. This is why the experience cannot be reproduced on a screen.
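The uint8-vs-float32 analogy above can be sketched numerically. The luminance values below are hypothetical, chosen only to span far more than a 1:100 range; the point is that an 8-bit buffer quantizes everything into 256 levels and crushes the shadows, while a float buffer preserves the full ratio.

```python
import numpy as np

# Hypothetical scene luminances spanning a 1:10000 range
radiance = np.array([0.5, 5.0, 50.0, 500.0, 5000.0])

# 8-bit "display" buffer: normalize to the brightest value, quantize to 256 levels.
# The two dimmest values collapse to 0 -- the shadow detail is gone.
display = np.clip(radiance / radiance.max() * 255, 0, 255).astype(np.uint8)

# float32 "scene" buffer: the full dynamic range survives unquantized
scene_buf = radiance.astype(np.float32)

print(display)                            # dim values crushed to 0
print(scene_buf.max() / scene_buf.min())  # full 1:10000 ratio preserved
```

This is only an analogy for the comment's claim about perception, not a model of vision, but it shows why a fixed-range output device loses information that a wider representation keeps.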