dahart a day ago

Yes! You’re talking about the space between the application’s output and the display, and I’m talking about capture, storage, and processing before tone mapping. In many HDR workflows there are a lot of stages between the camera and the tone mapper, and that is where I see most of the value of HDR workflows. The original point of HDR was to capture physical units, and that means storing scene-referred values after the scene hits the camera and before output. The original point of tone mapping was to squeeze those captured HDR images down to LDR output, i.e. to bake absolute physical luminance down to relative colors.

Until just a few years ago, tone mapping was the end of the HDR part of the pipeline; everything downstream was SDR/LDR. That’s getting muddy these days with “HDR” TVs that can do 3k nits, and all kinds of color spaces, but HDR conceptually and historically didn’t require any of that and still doesn’t. Plenty of workflows still exist where the input of the tone mapper is HDR and the output is SDR/LDR, with no swapchain, PQ, or HDR TV in the loop.
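To make that concrete, here’s a minimal sketch of the kind of pipeline I mean, using the global Reinhard operator as one common choice of tone mapper. The function name and the `key` default are illustrative, not from any particular library; the point is that the input is unbounded scene-referred luminance and the output is plain relative SDR values, with no display metadata involved.

    import numpy as np

    def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
        """HDR scene-referred luminance in, relative SDR values in [0, 1) out.
        `key` is the middle-grey target from Reinhard et al. 2002."""
        # Log-average luminance of the scene.
        log_avg = np.exp(np.mean(np.log(luminance + eps)))
        # Scale absolute luminance to a relative exposure.
        scaled = (key / log_avg) * luminance
        # Compress unbounded values into [0, 1).
        return scaled / (1.0 + scaled)

    # Synthetic HDR luminance spanning several orders of magnitude (cd/m^2).
    hdr = np.random.lognormal(mean=2.0, sigma=2.0, size=(480, 640))
    ldr = reinhard_tonemap(hdr)
    # Gamma-encode for an 8-bit SDR target (rough sRGB approximation).
    sdr_8bit = (np.clip(ldr, 0, 1) ** (1 / 2.2) * 255).astype(np.uint8)

Everything above the `reinhard_tonemap` call works in physical(ish) units; everything after it is relative and display-agnostic, which is the historical shape of the HDR pipeline I’m describing.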