kllrnohj a day ago

> You’re using different words to say exactly the same thing I was trying to say. You’re not arguing with me, you’re agreeing. Your “arbitrary brightness” means the same thing I meant by “max brightness”, because I meant max brightness as the maximum brightness the device will display at its current arbitrary settings, not the absolute maximum brightness the device is capable of if you change the settings later.

No, it's not! HDR exceeds that brightness; it isn't limited to whatever the display's maximum currently is.

> SDR, like PQ, is a video standard and has an absolute reference point (100 nits),

SDR isn't a standard at all; it's a catch-all for anything that's not HDR. And no, it has no absolute reference point.

> To circle back to my main point, I still believe that using physical units is the most important part of HDR conceptually.

The only actual feature of HDR is that display brightness is decoupled from the user's brightness setting, allowing content to "punch through" that ceiling. In camera terms, that means the user is controlling middle grey, not the point at which clipping occurs.
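
Roughly, as a sketch (illustrative names, not any real display API):

    # Illustrative sketch, not a real API: how the user's brightness
    # control behaves in each model (signal is normalized, 1.0 = white).
    def sdr_output_nits(signal, user_setting_nits):
        # SDR: the setting rescales everything, so encoded white (1.0)
        # always lands exactly at the user's chosen peak.
        return signal * user_setting_nits

    def hdr_output_nits(signal, user_white_nits, panel_peak_nits):
        # HDR: the setting anchors diffuse white / middle grey, while
        # signal values above 1.0 punch through toward the panel's
        # hardware peak.
        return min(signal * user_white_nits, panel_peak_nits)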

dahart a day ago

> SDR isn’t a standard at all, it’s a catch-all to mean anything not-HDR.

Maybe you’re thinking of LDR. Low Dynamic Range is the catch-all opposite of HDR. SDR (Standard Dynamic Range) means Rec. 709 to most people, doesn’t it? That’s how I’ve been using those two acronyms all along in this thread, in case you want to revisit.

https://en.wikipedia.org/wiki/Standard-dynamic-range_video

https://support.apple.com/guide/motion/about-color-space-mot...

> The only actual feature of HDR is the concept that display brightness is decoupled from user brightness, allowing content to “punch through” that ceiling.

There are lots of features of HDR, depending on what your goals are, and many definitions over the years. It’s fair to say that using absolute physical units instead of relative colors does decouple the user’s color from the display, so maybe you’re agreeing.

Use of physical units was one of the important motivations for the invention of HDR. The first HDR file format I know of (and have used), created for the Radiance renderer in ~1985, was designed for lighting simulation and physical validation in applications like aerospace, forensics, and architecture. https://en.wikipedia.org/wiki/Radiance_(software)#HDR_image_...

In CG film & video game production (e.g., the article we’re commenting on), it’s important that the pre-output HDR workflow is linear, has a maximum brightness significantly higher than any possible display device, and uses a higher bit depth than the final output, to allow for wide latitude in post-processing, compositing, re-exposure, and tone mapping.
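
A toy illustration of why the linear, high-range part matters (my sketch, not from the article): light adds linearly, so render passes can only be summed before any gamma or tone map is applied.

    import numpy as np

    # Two render passes in linear float; the second pixel is already
    # brighter than display white (1.0), which float storage preserves.
    key_light  = np.float32([0.20, 4.0])
    fill_light = np.float32([0.15, 2.5])

    combined = key_light + fill_light   # physically meaningful in linear
    # In a clipped 8-bit output space the bright pixel would have been
    # baked to 1.0 in both passes, and this sum would be wrong.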

In photography, where everyone has always been able to control middle grey, people use HDR because they care about avoiding hard clipping, want to control what happens to the sun and bright highlights, and want to re-expose pictures that appear clipped in the blacks or whites to reveal new detail.
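
For example (a hypothetical numpy sketch, not anyone’s actual pipeline), a linear float image keeps highlight data that an 8-bit JPG bakes away, so an exposure change can bring it back:

    import numpy as np

    def reexpose(hdr_linear, stops):
        # Linear float pixels can sit far above display white (1.0).
        return hdr_linear * (2.0 ** stops)

    sun = np.float32([50.0])      # 50x over display white
    print(reexpose(sun, -6))      # -> [0.78125], back in displayable range
    # An 8-bit capture would have stored 1.0 here; no exposure change
    # can reveal detail that was never recorded.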

> In camera terms that means the user is controlling middle grey, not the point at which clipping occurs.

I’m not sure what you mean, but that sounds like you’re referring to tonemapping specifically, or even just gamma, and not HDR generally. With an analog film camera, which I hope we can agree is not HDR imaging, I can control middle grey with my choice of exposure, film stock, and lens filters (and a similar set of middle-grey controls exists when printing). The same is true for a digital camera that captures only 8-bit/channel JPG files. Tonemapping certainly comes with a lot of HDR workflows, but it neither defines what HDR means nor is it necessary. You can do HDR imaging without any tonemapping, and without trying to control middle grey.

kllrnohj a day ago

Using physical units to describe your scene before it hits your camera makes perfect sense. Using physical units for the swapchain on the way to the display (which is where PQ enters the picture), however, does not make sense, and is a bug with HDR10(+)/DolbyVision.
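
For reference, PQ is exactly that absolute mapping. A sketch of the encode side (the SMPTE ST 2084 inverse EOTF), using the constants from the spec:

    # PQ / SMPTE ST 2084 inverse EOTF: absolute nits -> [0,1] signal.
    # Constants are from the spec; the rest is a sketch, not production code.
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875

    def pq_encode(nits):
        y = min(max(nits / 10000.0, 0.0), 1.0)   # normalize to the 10,000-nit max
        yp = y ** m1
        return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

    # pq_encode(100.0) ~= 0.508: the signal pins 100 nits to a fixed code
    # value no matter what the user's display is actually set to.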

I think you've been focused entirely on the scene component, pre-camera/tone mapping, whereas I'm talking entirely about what comes after that: the part that's sent into the system or out to the display.

dahart a day ago

Yes! You’re talking about the space between the application’s output and the display, and I’m talking about capture, storage, and processing before tone mapping. In many HDR workflows there are a lot of steps between camera and tone mapping, and that is where I see most of the value of HDR.

The original point of HDR was to capture physical units, which means storage after the scene hits the camera and before output. The original point of tone mapping was to squeeze the captured HDR images down to LDR output, to bake physical luminance down to relative colors. Until just a few years ago, tone mapping was the end of the HDR part of the pipeline; everything downstream was SDR/LDR.

That’s getting muddy these days with “HDR” TVs that can do 3k nits, and all kinds of color spaces, but HDR conceptually and historically didn’t require any of that and still doesn’t. Plenty of workflows still exist where the input of tone mapping is HDR and the output is SDR/LDR, with no swapchain or PQ or HDR TVs in the loop.
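
As a concrete sketch of that classic pipeline (illustrative only; Reinhard’s global operator is just one common choice of tone map):

    import numpy as np

    def tonemap_to_ldr(lum, key=0.18):
        # Physical-ish linear luminance in, relative 8-bit values out.
        avg = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average luminance
        scaled = key * lum / avg                    # exposure: pin middle grey
        compressed = scaled / (1.0 + scaled)        # roll off highlights, no hard clip
        return (compressed ** (1 / 2.2) * 255).astype(np.uint8)  # gamma + quantize

    # No PQ, no swapchain, no HDR display: the output is ordinary SDR/LDR.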