dahart 2 days ago

> The most common relative colorspaces have 1.0 as an arbitrary user brightness setting, not the max brightness.

You’re using different words to say exactly the same thing I was trying to say. You’re not arguing with me; you’re agreeing. Your “arbitrary brightness” means the same thing I meant by “max brightness”, because I meant the maximum brightness the device will display at its current arbitrary settings, not the absolute maximum brightness the device is capable of if you change the settings later.

It would be better if you take a moment to think in terms of print rather than only video standards. You can’t change the brightness of paper. 1.0 always means reflect all the light you can reflect, and there is no brightness setting. Because print is reflective and not emissive, print always takes relative colors. The analogy extends pretty naturally to TVs at a given brightness setting. The most common relative colorspaces (by which I mean the old crappy non-perceptual ones like RGB, HSV, CMYK) are still relative, meaning the brightest colors they represent (what we’re calling 1.0) map to the brightest the display will do at that moment in time.
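
To make “relative” concrete, here’s a toy sketch (my own illustration, with made-up nit values, assuming a linear encoding with no gamma): the same 1.0 lands on a different physical luminance whenever the brightness setting changes.

    # A relative value of 1.0 means "the brightest this display will do
    # right now", so its physical meaning moves with the brightness setting.

    def relative_to_nits(value, current_peak_nits):
        # Map a relative [0, 1] value to absolute luminance, assuming a
        # linear encoding for simplicity (no gamma).
        return value * current_peak_nits

    print(relative_to_nits(1.0, 200.0))  # 200.0 nits at one brightness setting
    print(relative_to_nits(1.0, 350.0))  # 350.0 nits after the user turns it up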

SDR, like PQ, is a video standard and has an absolute reference point (100 nits), so of course any relative color space can have a 1.0 value greater than SDR white, because most TVs these days can exceed 100 nits. Is that what you mean? I don’t see why that’s relevant to what 1.0 means in other color spaces.

You still haven’t really explained what’s wrong with PQ. I’ve never used it directly, but do you have any links or any explanation to support your claim that it’s “wrong”? Why is it wrong? What color spaces are doing it right?

If people use ACES shaders in GameMaker, as the article discussed, doesn’t that automatically mean that GameMaker is not using PQ? It doesn’t make any sense to have PQ after tonemapping. Maybe as a compression technique/format that the video card and the display use transparently, without any interaction with the user or the application, but then that’s not GameMaker, it’s the TV.

I still don’t understand why we’re talking about PQ. To circle back to my main point, I still believe that using physical units is the most important part of HDR conceptually. You seemed to disagree, but this entire discussion so far only seems to make that point clearer and firmer, and as far as I can tell you agree. IMO the canonical color space for HDR, and the best example, would be linear float channels where 1.0 is defined as 1.0 nits (or substitute another physical luminance unit). HDRI’s strengths and utility are in capture, storage, and intermediate workflows, not in output. As soon as you target a specific type of device, as soon as you tack on a transfer function, tonemapping, or a lower bit depth, you’re limiting options and losing information.
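
A toy sketch of that canonical representation (my own illustration, with hypothetical luminance values): storage is just linear floats in physical units, and nothing about any display is baked in.

    # Linear float channels where the stored number IS the physical
    # luminance (1.0 == 1 nit). No transfer function, no clipping.

    # A hypothetical four-pixel scene: shadow, mid-tone, bright sky, the sun.
    scene_nits = [0.05, 18.0, 8000.0, 1.6e9]

    # Re-exposing later is just a multiply; no information was baked away.
    stops = -4
    re_exposed = [v * 2.0**stops for v in scene_nits]
    print(re_exposed)  # highlight detail is still there to work with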

kllrnohj a day ago | parent

> You’re using different words to say exactly the same thing I was trying to say. You’re not arguing with me; you’re agreeing. Your “arbitrary brightness” means the same thing I meant by “max brightness”, because I meant the maximum brightness the device will display at its current arbitrary settings, not the absolute maximum brightness the device is capable of if you change the settings later.

No, it's not! HDR exceeds that brightness; it's not what the display's maximum currently is.

> SDR, like PQ, is a video standard and has an absolute reference point (100 nits),

SDR isn't a standard at all; it's a catch-all to mean anything not-HDR. But no, it has no absolute reference point.

> To circle back to my main point, I still believe that using physical units is the most important part of HDR conceptually.

The only actual feature of HDR is the concept that display brightness is decoupled from user brightness, allowing content to "punch through" that ceiling. In camera terms that means the user is controlling middle grey, not the point at which clipping occurs.
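
In rough pseudocode (a sketch of the idea, not any real API):

    # The user's brightness/exposure setting moves middle grey, but
    # highlights are NOT clipped at the SDR-white level (1.0).

    def expose(linear_value, user_middle_grey, scene_middle_grey=0.18):
        scaled = linear_value * (user_middle_grey / scene_middle_grey)
        return scaled  # deliberately no min(scaled, 1.0): headroom remains

    print(expose(0.18, 0.30))  # middle grey follows the user setting -> 0.3
    print(expose(4.00, 0.30))  # a highlight punches through well above 1.0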

dahart a day ago | parent

> SDR isn’t a standard at all, it’s a catch-all to mean anything not-HDR.

Maybe you’re thinking of LDR: Low Dynamic Range is the catch-all opposite of HDR. SDR (Standard Dynamic Range) means Rec. 709 to most people? That’s how I’ve been using those two acronyms all along in this thread, in case you want to revisit.

https://en.wikipedia.org/wiki/Standard-dynamic-range_video

https://support.apple.com/guide/motion/about-color-space-mot...

> The only actual feature of HDR is the concept that display brightness is decoupled from user brightness, allowing content to “punch through” that ceiling.

There are lots of features of HDR, depending on what your goals are, and there have been many definitions over the years. It’s fair to say that using absolute physical units instead of relative colors does decouple the user’s color from the display, so maybe you’re agreeing.

Use of physical units was one of the important motivations for the invention of HDR. The first HDR file format I know of and used, created for the Radiance renderer in ~1985, was designed for lighting simulation and physical validation, for applications like aerospace, forensics, architecture, etc. https://en.wikipedia.org/wiki/Radiance_(software)#HDR_image_...

In CG film & video game production (e.g., the article we’re commenting on), it’s important that the pre-output HDR workflow is linear, has a maximum brightness significantly higher than any possible display device, and uses a higher bit depth than the final output, to allow for wide latitude in post-processing, compositing, re-exposure, and tone mapping.
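
For example, the final step of such a workflow might be something like Krzysztof Narkowicz’s well-known ACES filmic approximation (the kind of curve the GameMaker ACES shaders mentioned above would approximate); everything upstream of this call stays linear and unclipped:

    # Narkowicz's ACES filmic fit: linear scene-referred values in,
    # relative [0, 1] display values out. This step loses information,
    # which is why it belongs at the very end of the pipeline.

    def aces_film_approx(x):
        a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
        y = (x * (a * x + b)) / (x * (c * x + d) + e)
        return max(0.0, min(1.0, y))

    for lum in (0.0, 0.18, 1.0, 4.0, 16.0):
        print(lum, "->", round(aces_film_approx(lum), 4))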

In photography, where everyone could already always control middle grey, people use HDR because they care about avoiding hard clipping, because they want to control what happens to the sun and bright highlights, and because they want to be able to re-expose pictures that appear clipped in blacks or whites to reveal new detail.

> In camera terms that means the user is controlling middle grey, not the point at which clipping occurs.

I’m not sure what you mean, but that sounds like you’re referring to tonemapping specifically, or even just gamma, not HDR generally. With an analog film camera, which I hope we can agree is not HDR imaging, I can control middle grey with my choice of exposure, my choice of film, and my lens filters (and a similar set of middle-grey-controlling choices exists when printing). The same is true for a digital camera that captures only 8-bit/channel JPG files. Tonemapping certainly comes with a lot of HDR workflows, but it does not define what HDR means, nor is it necessary. You can do HDR imaging without any tonemapping, and without trying to control middle grey.

kllrnohj a day ago | parent

Physical units to describe your scene before it hits your camera make perfect sense. Physical units for the swapchain on the way to the display (which is where PQ enters the picture), however, do not make sense, and are a bug with HDR10(+)/DolbyVision.
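
For reference, PQ (SMPTE ST 2084) is an absolute encoding: the code value is a fixed function of physical nits, with no display-dependent term anywhere; that absoluteness is exactly what I'm objecting to. A sketch of the encode side, using the constants from the standard:

    # PQ (SMPTE ST 2084) encode: absolute nits -> [0, 1] signal.

    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32     # 18.8515625
    c3 = 2392 / 4096 * 32     # 18.6875

    def pq_encode(nits):
        y = max(nits, 0.0) / 10000.0  # normalized to PQ's 10,000-nit ceiling
        yp = y ** m1
        return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

    print(pq_encode(100.0))   # ~0.508, regardless of the attached display
    print(pq_encode(1000.0))  # ~0.752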

I think you've been focused entirely on the scene component, pre-camera/tone mapping, whereas I'm talking entirely about what comes after that: the part that's sent into the system or out to the display.

dahart a day ago | parent

Yes! You’re talking about the space between the application’s output and the display, and I’m talking about capture, storage, and processing before tone mapping. In many HDR workflows there are a lot of things in between camera and tone mapping, and that is where I see most of the value of HDR workflows. The original point of HDR was to capture physical units, and that means storage after the scene hits the camera and before output. The original point of tone mapping was to squeeze the captured HDR images down to LDR output, to bake physical luminance down to relative colors. Until just a few years ago, tone mapping was the end of the HDR part of the pipeline; everything downstream was SDR/LDR. That’s getting muddy these days with “HDR” TVs that can do 3k nits, and all kinds of color spaces, but HDR conceptually and historically didn’t require any of that and still doesn’t. Plenty of workflows still exist where the input of tonemapping is HDR and the output is SDR/LDR, and there’s no swapchain or PQ or HDR TVs in the loop.