barishnamazov a day ago

I love posts that peel back the abstraction layer of "images." It really highlights that modern photography is just signal processing with better marketing.

A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data. In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.
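
To make the "reconstruct green first" idea concrete, here is a minimal sketch (my own, in Python/NumPy, not taken from any real pipeline) that rebuilds the green plane from an RGGB mosaic with plain bilinear averaging; real demosaicers add edge-directed weighting and then reconstruct red/blue as color differences on top of this:

    import numpy as np

    def interpolate_green(mosaic):
        """Toy green-channel reconstruction from an RGGB mosaic (2D array).
        Green sites keep their measured value; red/blue sites get the mean
        of their four green neighbours. No gradient logic, no edge cases."""
        h, w = mosaic.shape
        green = mosaic.astype(np.float64)

        # In RGGB, green is missing wherever row and column parity match:
        # (0, 0) is a red site, (1, 1) is a blue site.
        yy, xx = np.mgrid[0:h, 0:w]
        missing = (yy % 2) == (xx % 2)

        # Average of the 4-neighbourhood at every pixel...
        p = np.pad(green, 1, mode="edge")
        neighbours = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

        # ...used only where the mosaic has no green sample.
        green[missing] = neighbours[missing]
        return green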

Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.

shagie 21 hours ago | parent | next [-]

> A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data.

From the man page for ppmtopgm, which converts the classic "ppm" (portable pixel map) file format to "pgm" (portable graymap):

https://linux.die.net/man/1/ppmtopgm

    The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.
You'll note the relatively high value of green there, making up nearly 60% of the luminosity of the resulting grayscale image.

I also love the quote in there...

   Quote

   Cold-hearted orb that rules the night
   Removes the colors from our sight
   Red is gray, and yellow white
   But we decide which is right
   And which is a quantization error.
(context for the original - https://www.youtube.com/watch?v=VNC54BKv3mc )
skrebbel 12 hours ago | parent | next [-]

> The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.

Seriously. We can trust linux man pages to use the same 1-letter variable name for 2 different things in a tiny formula, can't we?

boltzmann-brain 19 hours ago | parent | prev | next [-]

Funnily enough that's not the only mistake he made in that article. His final image is noticeably different from the camera's output image because he rescaled the values in the first step. That's why the dark areas look so crushed, e.g. around the firewood carrier on the lower left or around the cat, and similarly with highlights, e.g. the specular highlights on the ornaments.

After that, the next most important problem is the fact he operates in the wrong color space, where he's boosting raw RGB channels rather than luminance. That means that some objects appear much too saturated.

So his photo isn't "unprocessed", it's just incorrectly processed.

tpmoney 17 hours ago | parent | next [-]

I didn’t read the article as implying that the final image the author arrived at was “unprocessed”. The point seemed to be that the first image was “unprocessed” but that the “unprocessed” image isn’t useful as a “photo”. You only get a proper “picture” of something after you do quite a bit of processing.

integralid 16 hours ago | parent [-]

Definitely what the author means:

>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” then the original: they are different renditions of the same data.

viraptor 15 hours ago | parent [-]

That's not how I read it. As in, this is an incidental comment. But the unprocessed version is the raw values from the sensors visible in the first picture, the processed are both the camera photo and his attempt at the end.

eloisius 11 hours ago | parent | next [-]

This whole post read like an in-depth response to people that claim things like “I don’t do any processing to my photos” or feel some kind of purist shame about doing so. It’s a weird chip some amateur photographers have on their shoulders, but even pros “process” their photos and have done so all the way back to the beginning of photography.

Edman274 4 hours ago | parent [-]

Is it fair to recognize that there is a category difference between the processing that happens by default on every cell phone camera today, and the time and labor intensive processing performed by professionals in the time of film? What's happening today is like if you took your film to a developer and then the negatives came back with someone having airbrushed out the wrinkles and evened out skin tones. I think that photographers back in the day would have made a point of saying "hey, I didn't take my film to a lab where an artist goes in and changes stuff."

svara 11 hours ago | parent | prev [-]

But mapping raw values to screen pixel brightness already entails an implicit transform, so arguably there is no such thing as an unprocessed photo (that you can look at).

Conversely the output of standard transforms applied to a raw Bayer sensor output might reasonably be called the "unprocessed image", since that is what the intended output of the measurement device is.

Edman274 4 hours ago | parent [-]

Would you consider all food in existence to be "processed", because ultimately all food is chopped up by your teeth or broken down by your saliva and stomach acid? If some descriptor applies to every single member of a set, why use the descriptor at all? It carries no semantic value.

seba_dos1 8 hours ago | parent | prev [-]

You do need to rescale the values as the first step, but not exactly the described way (you need to subtract the data pedestal in order to get linear values).

akx 15 hours ago | parent | prev [-]

If someone's curious about those particular constants, they're the PAL Y' matrix coefficients: https://en.wikipedia.org/wiki/Y%E2%80%B2UV#SDTV_with_BT.470

delecti a day ago | parent | prev | next [-]

I have a related anecdote.

When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.
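
For illustration, a rough sketch of what that difference amounts to (Python/NumPy, using the usual BT.601 weights as an assumption; this is not the actual FFmpeg invocation):

    import numpy as np

    def to_4bit_gray(rgb, naive=False):
        """Illustration only: channel average vs. a luma-weighted sum,
        quantized to the 16 levels of a 4-bit grayscale panel. `rgb` is
        an H x W x 3 uint8 array; the BT.601 weights are an assumption,
        not necessarily what that FFmpeg pipeline used."""
        rgb = rgb.astype(np.float64)
        if naive:
            gray = rgb.mean(axis=-1)                      # the "rough-looking" version
        else:
            gray = rgb @ np.array([0.299, 0.587, 0.114])  # perceptual weighting
        return np.clip(gray / 255.0 * 15.0, 0.0, 15.0).round().astype(np.uint8)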

isoprophlex 15 hours ago | parent | next [-]

I wouldn't worry about it too much, looking at ads is always a shitty experience. Correctly grayscaled or not.

wolvoleo 8 hours ago | parent [-]

True, though in the case of the Kindle they're not really intrusive (only appearing when it's off) and the price to remove them is pretty reasonable ($10 to remove them forever IIRC).

As far as ads go, that's not bad IMO.

marxisttemp 6 hours ago | parent [-]

The price of an ad-free original kindle experience was $409. The $10 is on top of the price the user paid for the device.

delecti 3 hours ago | parent [-]

Let's not distort the past. The ads were introduced a few years later with the Kindle Keyboard, which launched with an MSRP of $140 for the base model, or $115 with ads. That was a substantial discount on a product which was already cheap when it released.

All for ads which are only visible when you aren't using the device anyway. Don't like them? Then buy other devices, pay to have them removed, get a cover to hide them, or just store it with the screen facing down when you aren't using it.

barishnamazov a day ago | parent | prev [-]

I don't think Kindle ads were available in my region in 2015 because I don't remember seeing these back then, but you're lucky to have been the one to fix this classic mistake :-)

I remember trying out some of the home-made methods while I was implementing a creative work section for a school assignment. It’s surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent on a 4-bit display.

kccqzy a day ago | parent | next [-]

I remember using some photo editing software (Aperture I think) that would allow you to customize the different coefficients and there were even presets that give different names to different coefficients. Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.

acomjean 18 hours ago | parent [-]

>Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.

I went to a photoshop conference. There was a session on converting color to black and white. Basically at the end the presenter said you try a bunch of ways and pick the one that looks best.

(people there were really looking for the “one true way”)

I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter..

Grimm665 3 hours ago | parent | next [-]

> I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter..

Dark skies and dramatic clouds!

https://i.ibb.co/0RQmbBhJ/05.jpg

(shot on Rollei Superpan with a red filter and developed at home)

jnovek 8 hours ago | parent | prev [-]

This is actually a real bother to me with digital — I can never get a digital photo to follow the same B&W sensitivity curve as I had with film so I can never digitally reproduce what I “saw” when I took the photo.

marssaxman 6 hours ago | parent [-]

Film still exists, and the hardware is cheap now!

I am shooting a lot of 120-format Ilford HP5+ these days. It's a different pace, a different way of thinking about the craft.

reactordev a day ago | parent | prev [-]

If you really want that old school NTSC look: 0.3R + 0.59G + 0.11B

These are the coefficients I use regularly.

JKCalhoun 7 hours ago | parent | next [-]

Yep, used in the early MacOS color picker as well when displaying greyscale from RGB values. The three weights (which of course add to 1.0) clearly show a preference for the green channel for luminosity (as was discussed in the article).

ycombiredd 21 hours ago | parent | prev [-]

Interesting that the "NTSC" look you describe is essentially rounded versions of the coefficients quoted in the comment mentioning ppm2pgm. I don't know the lineage of the values you used of course, but I found it interesting nonetheless. I imagine we'll never know, but it would be cool to be able to trace the path that led to their formula, as well as the path to you arriving at yours.

zinekeller 21 hours ago | parent | next [-]

The NTSC color coefficients are the grandfather of all luminance coefficients.

It had to be precisely defined because of the requirements of backwards-compatible color transmission (YIQ is the common abbreviation for the NTSC color space, I being ~reddish and Q being ~blueish). Basically, they treated B&W (technically monochrome) pictures the way B&W film and camera tubes treated them: great in green, average in red, and poor in blue.

A bit unrelated: pre-color transition, the makeup used was actually slightly greenish too (which appears nicely in monochrome).

shagie 20 hours ago | parent | next [-]

To the "the grandfather of all luminance coefficients" ... https://www.earlytelevision.org/pdf/ntsc_signal_specificatio... from 1953.

Page 5 has:

    Eq' = 0.41 (Eb' - Ey') + 0.48 (Er' - Ey')
    Ei' = -0.27(Eb' - Ey') + 0.74 (Er' - Ey')
    Ey' = 0.30Er' + 0.59Eg' + 0.11Eb'
The last equation has those coefficients.
zinekeller 20 hours ago | parent [-]

I was actually researching why PAL YUV has the same(-ish) coefficients, while forgetting that PAL is essentially a refinement of the NTSC color standard (PAL stands for phase-alternating line, which solves much of NTSC's color drift issues early in its life).

adrian_b 12 hours ago | parent [-]

It is the choice of the 3 primary colors and of the white point which determines the coefficients.

PAL and SECAM use different color primaries than the original NTSC, and a different white, which leads to different coefficients.

However, the original color primaries and white used by NTSC had become obsolete very quickly so they no longer corresponded with what the TV sets could actually reproduce.

Eventually even for NTSC a set of primary colors was used that was close to that of PAL/SECAM, which was much later standardized by SMPTE in 1987. The NTSC broadcast signal continued to use the original formula, for backwards compatibility, but the equipment processed the colors according to the updated primaries.

In 1990, Rec. 709 standardized a set of primaries intermediate between those of PAL/SECAM and of SMPTE, which was later also adopted by sRGB.

zinekeller 10 hours ago | parent [-]

Worse, "NTSC" is not a single standard; Japan deviated from it so much that the primaries are defined by their own ARIB standard (notably a ~9000 K white point).

... okay, technically PAL and SECAM too, but only in audio (analogue Zweikanalton versus digital NICAM), bandwidth placement (channel plan and relative placement of audio and video signals, and, uhm, teletext) and, uhm, teletext standard (French Antiope versus Britain's Teletext and Fastext).

zinekeller 10 hours ago | parent [-]

(this is just a rant)

Honestly, the weird 16-235 (on 8-bit) video range and 60000/1001 fps limitations stem from the original NTSC standard, which is rather frustrating nowadays considering that neither the Japanese NTSC adaptation nor the European standards have them. Both the HDVS and HD-MAC standards define it in precise ways (exactly 60 fps for HDVS and 0-255 color range for HD-MAC*) but America being America...

* I know that HD-MAC is analog(ue), but it has an explicit digital step for transmission and it uses the whole 8 bits for the conversion!

reactordev 8 hours ago | parent [-]

Y’all are a gold mine. Thank you. I only knew it from my forays into computer graphics and making things look right on (now older) LCD TVs.

I pulled it from some old academic papers about why you can’t just max(uv.rgb) to do greyscale, nor can you just do float val = uv.r.

This gets even funkier when we have BGR vs RGB and have to swizzle the bytes beforehand.

Thanks for adding clarity and history to where those weights came from, why they exist at all, and the decision tree that got us there.

People don’t realize how many man hours went into those early decisions.

shagie 7 hours ago | parent [-]

> People don’t realize how many man hours went into those early decisions.

In my "trying to hunt down the earliest reference for the coefficients" I came across "Television standards and practice; selected papers from the Proceedings of the National television system committee and its panels" at https://archive.org/details/televisionstanda00natirich/mode/... which you may enjoy. The "problem" in trying to find the NTSC color values is that the collection of papers is from 1943... and color TV didn't become available until the 50s (there is some mention of color but I couldn't find it) - most of the questions of color are phrased with "should".

reactordev 6 hours ago | parent [-]

This is why I love graphics and game engines. It's this focal point of computer science, art, color theory, physics, practical implications for other systems around the globe, and humanities.

I kept a journal as a teenager when I started and later digitized it when I was in my 20s. The biggest impact was mostly SIGGRAPH papers that are now available online such as "Color Gamut Transform Pairs" (https://www.researchgate.net/publication/233784968_Color_Gam...).

I bought all the GPU Gems books, all the ShaderX books (shout out to Wolfgang Engel, his books helped me tremendously), and all the GPU pro books. Most of these are available online now but I had sagging bookshelves full of this stuff in my 20s.

Now in my late 40s, I live like an old Japanese man with minimalism and very little clutter. All my readings are digital, iPad-consumable. All my work is online, cloud based or VDI or an ssh away. I still enjoy learning but I feel like because I don't have a prestigious degree in the subject, it's better to let others teach it. I'm just glad I was able to build something with that knowledge and release it into the world.

ycombiredd 20 hours ago | parent | prev [-]

Cool. I could have been clearer in my post; as I understand it actual NTSC circuitry used different coefficients for RGBx and RGBy values, and I didn't take time to look up the official standard. My specific pondering was based on an assumption that neither the ppm2pgm formula nor the parent's "NTSC" formula were exact equivalents to NTSC, and my "ADHD" thoughts wondered about the provenance of how each poster came to use their respective approximations. While I write this, I realize that my actual ponderings are less interesting than the responses generated because of them, so thanks everyone for your insightful responses.

reactordev 19 hours ago | parent [-]

There are no stupid questions, only stupid answers. It’s questions that help us understand and knowledge is power.

reactordev 21 hours ago | parent | prev [-]

I’m sure it has its roots in the Amiga or TV broadcasting. ppm2pgm is old school too so we all tended to use the same defaults.

Like q3_sqrt

liampulles 15 hours ago | parent | prev | next [-]

The bit about the green over-representation in camera color filters is partially correct. Human color sensitivity varies a lot from individual to individual (and not just amongst individuals with color blindness), but general statistics indicate we are most sensitive to red light.

The main reason is that green does indeed overwhelmingly contribute to perceptual luminance (over 70% in sRGB once gamma corrected: https://www.w3.org/TR/WCAG20/#relativeluminancedef) and modern demosaicking algorithms will rely on both derived luminance and chroma information to get a good result (and increasingly spatial information, e.g. "is this region of the image a vertical edge").
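
For anyone who wants to check the "over 70%" figure, the linked WCAG definition boils down to something like this (a small Python sketch; the 0.03928 threshold is the one quoted by WCAG 2.0 itself):

    def relative_luminance(r8, g8, b8):
        """WCAG 2.0 relative luminance of an 8-bit sRGB pixel: undo the
        gamma encoding per channel, then take the Rec. 709 weighted sum
        (0.7152 of which comes from the green channel)."""
        def linearize(c8):
            c = c8 / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = linearize(r8), linearize(g8), linearize(b8)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b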

I believe small neural networks are the current state of the art (e.g. trained to reverse a 16x16 color filter pattern for the given camera). What is currently in use by modern digital cameras is all trade secret stuff.

kuschku 9 hours ago | parent | next [-]

> Small neural networks I believe are the current state of the art (e.g. train to reverse a 16x16 color filter pattern for the given camera). What is currently in use by modern digital cameras is all trade secret stuff.

Considering you usually shoot RAW, and debayer and process in post, the camera hasn't done any of that.

It's only smartphones that might be doing internal AI Debayering, but they're already hallucinating most of the image anyway.

liampulles an hour ago | parent | next [-]

Sure - if you don't want to do demosaicing on the camera, that's fine. It doesn't mean there is not an algorithm there as an option.

If you care about trying to get an image that is as accurate as possible to the scene, then it is well within your interest to use a Convolutional Neural Network based algorithm, since these are amongst the highest performing in terms of measured PSNR (which is what nearly all demosaicing algorithms in academia are measured on). You are maybe thinking of generative AI?

15155 8 hours ago | parent | prev [-]

Yes, people usually shoot RAW (anyone spending this much on a camera knows better) - but these cameras default to JPEG and often have dual-capture (RAW+JPEG) modes.

NooneAtAll3 14 hours ago | parent | prev | next [-]

> we are most sensitive to red light

> green does indeed overwhelmingly contribute to perceptual luminance

so... if luminance contribution is different from "sensitivity" to you - what do you imply by sensitivity?

liampulles 12 hours ago | parent [-]

Upon further reading, I think I am wrong here. My confusion was that I read that over 60% of the cones in one's eye are "red" cones (which is a bad generalization), and there is more nuance here.

Given equal power red, blue, or green light hitting our eyes, humans tend to rate green "brighter" in pairwise comparative surveys. That is why it is predominant in a perceptual luminance calculation converting from RGB.

Though there are many more L-cones (which react most strongly to "yellow" light, not "red"; also, "how many more" varies across individuals) than M-cones (which react most strongly to a "greenish cyan"), the combination of these two cone types (which make up ~95% of the cones in the eye) means that we are able to sense green light much more efficiently than other wavelengths. S-cones (which react most strongly to "purple") are very sparse.

skinwill 7 hours ago | parent [-]

This is way oversimplifying, but I always understood it as: our eyes can see red with very little power needed, but they can differentiate more detail with green.

devsda 12 hours ago | parent | prev [-]

Is it related to the fact that monkeys/humans evolved around dense green forests ?

frumiousirc 11 hours ago | parent | next [-]

Well, plants and eyes long predate apes.

Water is most transparent in the middle of the "visible" spectrum (green). It absorbs red and scatters blue. The atmosphere has a lot of water as does, of course, the ocean which was the birth place of plants and eyeballs.

It would be natural for both plants and eyes to evolve to exploit the fact that there is a green notch in the water transparency curve.

Edit: after scrolling, I find more discussion on this below.

seba_dos1 8 hours ago | parent [-]

Eyes aren't all equal. Our trichromacy is fairly rare in the world of animals.

zuminator 5 hours ago | parent | prev [-]

I think any explanation along those lines would have a "just-so" aspect to it. How would we go about verifying such a thing? Perhaps if we compared and contrasted the eyes of savanna apes to forest apes and saw a difference, which to my knowledge we do not. Anyway, sunlight at ground level peaks around 555 nm, so it's believed that we're optimizing for that by being more sensitive to green.

brookst a day ago | parent | prev | next [-]

Even old school chemical films were the same thing, just in a different domain.

There is no such thing as “unprocessed” data, at least that we can perceive.

kdazzle 19 hours ago | parent | next [-]

Exactly - film photographers heavily process(ed) their images from the film processing through to the print. Ansel Adams wrote a few books on the topic and they’re great reads.

And different films and photo papers can have totally different looks, defined by the chemistry of the manufacturer and however _they_ want things to look.

acomjean 18 hours ago | parent | next [-]

Excepting slide photos: no real adjustment once taken (a more difficult medium than negative film, which you can adjust a little when printing).

You’re right about Ansel Adams. He “dodged and burned” extensively (lightened and darkened areas when printing.) Photoshop kept the dodge and burn names on some tools for a while.

https://m.youtube.com/watch?v=IoCtni-WWVs

When we printed for our college paper, we had a dial that could adjust the printed contrast of our black and white “multigrade” paper a bit (it added red light). People would mess with the processing to get different results too (cold / sepia toned). It was hard to get exactly what you wanted and I kind of see why digital took over.

macintux 8 hours ago | parent [-]

I found one way to "adjust" slide photos: I accidentally processed a (color) roll of mine using C-41. The result was surprisingly not terrible.

NordSteve 6 hours ago | parent | prev [-]

A school photography company I worked for used a custom Kodak stock. They were unsatisfied with how Kodak's standard portrait film handled darker skin tones.

They were super careful to maintain the look across the transition from film to digital capture. Families display multiple years of school photos next to each other and they wanted a consistent look.

adrian_b 12 hours ago | parent | prev [-]

True, but there may be different intentions behind the processing.

Sometimes the processing has only the goal of compensating for the defects of the image sensor and of the optical elements, in order to obtain the most accurate information about the light originally coming from the scene.

Other times the goal of the processing is just to obtain an image that appears best to the photographer, for some reason.

For casual photographers, the latter goal is typical, but in scientific or technical applications the former goal is frequently encountered.

Ideally, a "raw" image format is one where the differences between it and the original image are well characterized and there are no additional unknown image changes made for an "artistic" effect, so that further processing remains possible with either of the previously enumerated goals.

JumpCrisscross 17 hours ago | parent | prev | next [-]

> modern photography is just signal processing with better marketing

I pass on a gift I learned of from HN: Susan Sunday’s “On Photography”.

raphman 15 hours ago | parent [-]

Thanks! First hit online: https://www.lab404.com/3741/readings/sontag.pdf

Out of curiosity: what led you to write "Susan Sunday" instead of "Susan Sontag"? (for other readers: "Sonntag" is German for "Sunday")

JumpCrisscross 3 hours ago | parent [-]

> Out of curiosity: what led you to write "Susan Sunday" instead of "Susan Sontag"?

Grew up speaking German and Sunday-night brain did a substitution.

mradalbert 13 hours ago | parent | prev | next [-]

Also worth noting that manufacturers advertise the photodiode count as the sensor resolution. So if you have a 12 MP sensor then your green resolution is 6 MP, and blue and red are 3 MP each.

yzydserd 12 hours ago | parent | prev | next [-]

Another tangent. Bryce Bayer is the dad of a HN poster. https://news.ycombinator.com/item?id=12111995 https://news.ycombinator.com/item?id=36043826

integralid 16 hours ago | parent | prev | next [-]

And this is just what happens for a single frame. It doesn't even touch computational photography[1].

[1] https://dpreview.com/articles/9828658229/computational-photo...

cataflam 12 hours ago | parent [-]

Great series of articles!

mwambua 17 hours ago | parent | prev | next [-]

> The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data

How does this affect luminance perception for deuteranopes? (Since their color blindness is caused by a deficiency of the cones that detect green wavelengths)

fleabitdev 13 hours ago | parent | next [-]

Protanopia and protanomaly shift luminance perception away from the longest wavelengths of visible light, which causes highly-saturated red colours to appear dark or black. Deuteranopia and deuteranomaly don't have this effect. [1]

Blue cones make little or no contribution to luminance. Red cones are sensitive across the full spectrum of visible light, but green cones have no sensitivity to the longest wavelengths [2]. Since protans don't have the "hardware" to sense long wavelengths, it's inevitable that they'd have unusual luminance perception.

I'm not sure why deutans have such a normal luminous efficiency curve (and I can't find anything in a quick literature search), but it must involve the blue cones, because there's no way to produce that curve from the red-cone response alone.

[1]: https://en.wikipedia.org/wiki/Luminous_efficiency_function#C...

[2]: https://commons.wikimedia.org/wiki/File:Cone-fundamentals-wi...

doubletwoyou 15 hours ago | parent | prev | next [-]

The cones are the colour sensitive portion of the retina, but only make up a small percent of all the light detecting cells. The rods (more or less the brightness detecting cells) would still function in a deuteranopic person, so their luminance perception would basically be unaffected.

Also there’s something to be said about the fact that the eye is a squishy analog device, and so even if the medium-wavelength cones are deficient, the long-wavelength cones (red-ish) have overlap in their light sensitivities with the medium cones so…

fleabitdev 13 hours ago | parent [-]

The rods are only active in low-light conditions; they're fully active under the moon and stars, or partially active under a dim street light. Under normal lighting conditions, every rod is fully saturated, so they make no contribution to vision. (Some recent papers have pushed back against this orthodox model of rods and cones, but it's good enough for practical use.)

This assumption that rods are "the luminance cells" is an easy mistake to make. It's particularly annoying that the rods have a sensitivity peak between the blue and green cones [1], so it feels like they should contribute to colour perception, but they just don't.

[1]: https://en.wikipedia.org/wiki/Rod_cell#/media/File:Cone-abso...

volemo 15 hours ago | parent | prev [-]

It’s not that their M-cones (middle, i.e. green) don’t work at all; their M-cone responsivity curve is just shifted to be less distinguishable from their L-cone curve, so they effectively have double (or more) the “red sensors”.

f1shy 16 hours ago | parent | prev | next [-]

> The human eye is most sensitive to green light,

This argument is very confusing: if it is most sensitive, less intensity/area should be necessary, not more.

Lvl999Noob 15 hours ago | parent | next [-]

Since the human eye is most sensitive to green, it will find errors in the green channel much more easily than in the others. This is why you need _more_ green data.

afiori 8 hours ago | parent | prev | next [-]

Because that reasoning applies to binary signals, where sensitivity is about detection. In the case of our eyes, sensitivity means that we can detect many more distinct values: say we can see N distinct luminosity levels of monochrome green light but only a fraction of that many distinct levels of blue light.

So to describe/reproduce what our eyes see you need more detection range in the green spectrum

gudzpoz 15 hours ago | parent | prev | next [-]

Note that there are two measurement systems involved: first the camera, and then the human eyes. Your reasoning could be correct if there were only one: "the sensor is most sensitive to green light, so less sensor area is needed".

But that is not the case: we first measure with cameras, and then present the image to human eyes. Being more sensitive to a colour means that the same measurement error will lead to more observable artifacts. So to maximize visual authenticity, the best we can do is to make our cameras as (relatively) sensitive to green light as human eyes are.

f1shy 6 hours ago | parent [-]

Oh you are right! I’m so dumb! Of course it is the camera. To have the camera have the same sensitivity, we need more green pixels! I had my neurons off. Thanks.

matsemann 11 hours ago | parent | prev [-]

Yeah, was thinking the same. If we're more sensitive, why do we need double the sensors? Just have 1:1:1, and we would see more of the green anyway? Won't it be too much if we do 1:2:1, when we're already more sensitive to green?

seba_dos1 7 hours ago | parent [-]

With 1:1:1 the matrix isn't square, and if you have to double one of the channels for practical purposes then green is the obvious pick: it's the most beneficial to image quality because it increases the spatial resolution where our eyes can actually notice it.

Grab a random photo and blur its blue channel a bit. You probably won't notice much difference aside from some slight discoloration. Then try the same with the green channel.
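
If you want to try that experiment without opening an editor, a quick Pillow sketch (my own; the file name, channel choice and blur radius are arbitrary):

    from PIL import Image, ImageFilter

    def blur_one_channel(path, channel="blue", radius=4):
        """Blur a single RGB channel, leave the others untouched, and
        compare how visible the damage is per channel."""
        img = Image.open(path).convert("RGB")
        bands = list(img.split())  # (R, G, B)
        idx = {"red": 0, "green": 1, "blue": 2}[channel]
        bands[idx] = bands[idx].filter(ImageFilter.GaussianBlur(radius))
        return Image.merge("RGB", bands)

    # Compare: blur_one_channel("photo.jpg", "blue").show()
    #     vs.  blur_one_channel("photo.jpg", "green").show()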

dheera a day ago | parent | prev | next [-]

This is also why I absolutely hate, hate, hate it when people ask me whether I "edited" a photo or whether a photo is "original", as if trying to explain away nice-looking images as fake.

The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, calibrations, is not a crime.

beezle 19 hours ago | parent | next [-]

As a mostly amateur photographer, it doesn't bother me if people ask that question. While I understand the point that the camera itself may be making some 'editing'-type decisions on the data first, a) in theory each camera maker has attempted to calibrate the output to some standard, and b) the public would expect two photos taken at the same time with the same model camera to look identical. That differs greatly from what often can happen in "post production" editing - you'll never find two that are identical.

vladvasiliu 12 hours ago | parent | next [-]

> public would expect two photos taken at same time with same model camera should look identical

But this is wrong. My not-too-exotic 9-year-old camera has a bunch of settings which affect the resulting image quite a bit. Without going into "picture styles", or "recipes", or whatever they're called these days, I can alter saturation, contrast, and white balance (I can even tell it to add a fixed alteration to the auto WB and tell it to "keep warm colors"). And all these settings will alter how the in-camera produced JPEG will look, no external editing required at all.

So if two people are sitting in the same spot with the same camera, who's to say they both set them up identically? And if they didn't, which produces the "non-processed" one?

I think the point is that the public doesn't really understand how these things work. Even without going to the lengths described by another commenter (local adjust so that there appears to be a ray of light in that particular spot, remove things, etc), just playing with the curves will make people think "it's processed". And what I described above is precisely what the camera itself does. So why is there a difference if I do it manually after the fact or if I tell the camera to do it for me?

integralid 16 hours ago | parent | prev [-]

You and other responders to GP disagree with TFA:

>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” then the original: they are different renditions of the same data.

gorgolo 14 hours ago | parent | prev | next [-]

I noticed this a lot when taking pictures in the mountains.

I used to have a high resolution camera on a cheaper phone and then later switched to an iPhone. The latter produced much nicer pictures; my old phone just produced very flat-looking ones.

People say that the iPhone camera automatically edits the images to look better. And in a way I notice that too. But that’s the wrong way of looking at it; the more-edited picture from the iPhone actually corresponds more to my perception when I’m actually looking at the scene. The white of the snow and glaciers and the deep blue sky really does look amazing in real life, and when my old phone captured it as a flat and disappointing-looking photo with less postprocessing than an iPhone, it genuinely failed to capture what I can see with my eyes. And the more vibrant post-processed colours of an iPhone really do look more like what I think I’m looking at.

dsego 13 hours ago | parent | prev | next [-]

I don't think it's the same, for me personally I don't like heavily processed images. But not in the sense that they need processing to look decent or to convey the perception of what it was like in real life, more in the sense that the edits change the reality in a significant way so it affects the mood and the experience. For example, you take a photo on a drab cloudy day, but then edit the white balance to make it seem like golden hour, or brighten a part to make it seem like a ray of light was hitting that spot. Adjusting the exposure, touching up slightly, that's all fine, depending on what you are trying to achieve of course. But what I see on Instagram or shorts these days is people comparing their raws and edited photos, and without the edits the composition and subject would be just mediocre and uninteresting.

gorgolo 11 hours ago | parent | next [-]

The “raw” and unedited photo can be just as or even more unrealistic than the edited one though.

Photographs can drop a lot of the perspective, feeling and colour you experience when you’re there. When you take a picture of a slope on a mountain (on a ski piste, for example), it always looks much less impressive and steep on a phone camera. Same with colours. You can be watching an amazing scene in the mountains, but when you take a photo with most cameras, the colours are more dull, and it just looks flatter. If a filter enhances it and makes it feel as vibrant as the real life view, I’d argue you are making it more realistic.

The main message I get from OP’s post is precisely that there is no “real unfiltered / unedited image”, you’re always imperfectly capturing something your eyes see, but with a different balance of colours, different detector sensitivity to a real eye etc… and some degree of postprocessing is always required make it match what you see in real life.

foldr 11 hours ago | parent | prev [-]

This is nothing new. For example, Ansel Adams’s famous Moonrise, Hernandez photo required extensive darkroom manipulations to achieve the intended effect:

https://www.winecountry.camera/blog/2021/11/1/moonrise-80-ye...

Most great photos have mediocre and uninteresting subjects. It’s all in the decisions the photographer makes about how to render the final image.

to11mtm a day ago | parent | prev | next [-]

JPEG with OOC processing is different from JPEG OOPC (out-of-phone-camera) processing. Thank Samsung for forcing the need to differentiate.

seba_dos1 a day ago | parent [-]

I wrote the raw Bayer to JPEG pipeline used by the phone I write this comment on. The choices on how to interpret the data are mine. Can I tweak these afterwards? :)

Uncorrelated 15 hours ago | parent | next [-]

I found the article you wrote on processing Librem 5 photos:

https://puri.sm/posts/librem-5-photo-processing-tutorial/

Which is a pleasant read, and I like the pictures. Has the Librem 5's automatic JPEG output improved since you wrote the post about photography in Croatia (https://dosowisko.net/l5/photos/)?

seba_dos1 11 hours ago | parent [-]

Yes, these are quite old. I've written a GLSL shader that acts as a simple ISP capable of real-time video processing and described it in detail here: https://source.puri.sm/-/snippets/1223

It's still pretty basic compared to hardware accelerated state-of-the-art, but I think it produces decent output in a fraction of a second on the device itself, which isn't exactly a powerhouse: https://social.librem.one/@dos/115091388610379313

Before that, I had an app for offline processing that was calling darktable-cli on the phone, but it took about 30 seconds to process a single photo with it :)

to11mtm a day ago | parent | prev [-]

I mean it depends, does your Bayer-to-JPEG pipeline try to detect things like 'this is a zoomed in picture of the moon' and then do auto-fixup to put a perfect moon image there? That's why there's some need to differentiate between SOOC's now, because Samsung did that.

I know my Sony gear can't call out to AI because the WIFI sucks like every other Sony product and barely works inside my house, but also I know the first ILC manufacturer that tries to put AI right into RAW files is probably the first to leave part of the photography market.

That said I'm a purist to the point where I always offer RAWs for my work [0] and don't do any photoshop/etc. D/A, horizon, bright adjust/crop to taste.

Where phones can possibly do better is the smaller size and true MP structure of a cell phone camera sensor, which makes it easier to handle things like motion blur and rolling shutter.

But I have yet to see anything that gets closer to an ILC for true quality than the decade+ old PureView cameras on Nokia phones, probably partially because they often had large enough sensors.

There's only so much computation can do to simulate true physics.

[0] - I've found people -like- that. TBH, it helps that I tend to work cheap or for barter type jobs in that scene, however it winds up being something where I've gotten repeat work because they found me and a 'photoshop person' was cheaper than getting an AIO pro.

fc417fc802 20 hours ago | parent | prev | next [-]

There's a difference between an unbiased (roughly speaking) pipeline and what (for example) JBIG2 did. The latter counts as "editing" and "fake" as far as I'm concerned. It may not be a crime but at least personally I think it's inherently dishonest to attempt to play such things off as "original".

And then there's all the nonsense BigTech enables out of the box today with automated AI touch ups. That definitely qualifies as fakery although the end result may be visually pleasing and some people might find it desirable.

make3 21 hours ago | parent | prev [-]

it's not a crime, but applying post processing in an overly generous way that goes a lot further than replicating what a human sees does take away from what makes pictures interesting vs other mediums imho: that they're a genuine representation of something that actually happened.

if you take that away, a picture is not very interesting: it's hyperrealistic, so not super creative a lot of the time (compared to e.g. paintings), & it doesn't even require the mastery of other mediums to achieve hyperrealism

Eisenstein 21 hours ago | parent [-]

Do you also want the IR light to be in there? That would make it more of a 'genuine representation'.

BenjiWiebe 21 hours ago | parent | next [-]

Wouldn't be a genuine version of what my eyes would've seen, had I been the one looking instead of the camera.

I can't see infrared.

ssl-3 19 hours ago | parent | next [-]

Perhaps interestingly, many/most digital cameras are sensitive to IR and can record, for example, the LEDs of an infrared TV remote.

But they don't see it as IR. Instead, this infrared information just kind of irrevocably leaks into the RGB channels that we do perceive. With the unmodified camera on my Samsung phone, IR shows up kind of purple-ish. Which is... well... it's fake. Making invisible IR into visible purple is an artificially-produced artifact of the process that results in me being able to see things that are normally ~impossible for me to observe with my eyeballs.

When you generate your own "genuine" images using your digital camera(s), do you use an external IR filter? Or are you satisfied with knowing that the results are fake?

lefra 14 hours ago | parent [-]

Silicon sensors (which is what you'll get in all visible-light cameras as far as I know) are all very sensitive to near-IR. Their peak sensitivity is around 900nm. The difference between cameras that can see or not see IR is the quality of their anti-IR filter.

Your Samsung phone's green Bayer filter probably blocks IR better than the blue and red ones do.

Here's a random spectral sensitivity for a silicon sensor:

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRkffHX...

Eisenstein 20 hours ago | parent | prev [-]

But the camera is trying to emulate how it would look if your eyes were seeing it. In order for it to be 'genuine' you would need not only the camera to be genuine, but also the OS, the video driver, the viewing app, the display and the image format/compression. They all do things to the image that are not genuine.

make3 17 hours ago | parent | prev [-]

"of what I would've seen"

jamilton 18 hours ago | parent | prev | next [-]

Why that ratio in particular? I wonder if there’s a more complex ratio that could be better.

shiandow 13 hours ago | parent [-]

This ratio allows for a relatively simple 2x2 repeating pattern. That makes interpolating the values immensely simpler.

Also you don't want the red and blue to be too far apart, reconstructing the colour signal is difficult enough as it is. Moire effects are only going to get worse if you use an even sparser resolution.

formerly_proven 12 hours ago | parent | prev | next [-]

> It really highlights that modern photography is just signal processing with better marketing.

Showing linear sensor data on a logarithmic output device to show how heavily images are processed is an (often featured) sleight of hand, however.

thousand_nights a day ago | parent | prev | next [-]

the bayer pattern is one of those things that makes me irrationally angry, in the true sense, based on my ignorance of the subject

what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)

MyOutfitIsVague 21 hours ago | parent | next [-]

Green is in the center of the visible spectrum of light (notice the G in the middle of ROYGBIV), so evolution should theoretically optimize for green light absorption. An interesting article on why plants typically reflect that wavelength and absorb the others: https://en.wikipedia.org/wiki/Purple_Earth_hypothesis

bmitc 19 hours ago | parent [-]

Green is the highest energy light emitted by our sun, from any part of the entire light spectrum, which is why green appears in the middle of the visible spectrum. The visible spectrum basically exists because we "grew up" with a sun that blasts that frequency range more than any other part of the light spectrum.

imoverclocked 19 hours ago | parent | next [-]

I have to wonder what our planet would look like if the spectrum shifts over time. Would plants also shift their reflected light? Would eyes subtly change across species? Of course, there would probably be larger issues at play around having a survivable environment … but still, fun to ponder.

cycomanic 13 hours ago | parent | prev [-]

That comment does not make sense. Do you mean the sun emits its peak intensity at green? (I don't believe that is true either, but at least it would be a physically sensible statement.) To clarify why the statement does not make sense: the energy of light is directly proportional to its frequency, so saying that green is the highest-energy light the sun emits is saying the sun does not emit any light at frequencies higher than green, i.e. no blue light, no UV... That's obviously not true.

antonvs 10 hours ago | parent [-]

> Do you mean the sun emits its peak intensity at green

That's presumably what they mean. It's more or less true, except the color in question is at the green / yellow transition.

See e.g. https://s3-us-west-2.amazonaws.com/courses-images-archive-re...

milleramp 21 hours ago | parent | prev | next [-]

Several reasons:

- Silicon efficiency (QE) peaks in the green.
- The green spectral response curve is close to the luminance curve humans see, like you said.
- Twice the pixels increase the effective resolution in the green/luminance channel; the color channels in YUV contribute almost no detail.

Why are YUV or other luminance-chrominance color spaces important for an RGB input? Because many processing steps and encoders work in YUV colorspaces. This wasn't really covered in the article.
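
As a concrete example of the kind of conversion those steps rely on, here is a full-range BT.601 RGB to Y'CbCr sketch in Python/NumPy (the JPEG/JFIF flavour; broadcast video uses offset, limited-range variants of the same idea):

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Full-range BT.601 R'G'B' -> Y'CbCr (JPEG/JFIF convention):
        the green-heavy luma sum plus two scaled colour-difference
        channels, which are what 4:2:0 subsampling later discards
        resolution from."""
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 0.564 * (b - y) + 128.0
        cr = 0.713 * (r - y) + 128.0
        return np.stack([y, cb, cr], axis=-1)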

shiandow 12 hours ago | parent | prev | next [-]

You think that's bad? Imagine finding out that all video still encodes colour at half resolution simply because that is how analog TV worked.

seba_dos1 7 hours ago | parent | next [-]

I don't think that's correct. It's not "all video" - you can easily encode video without chroma subsampling - and it's not because this is how analog TV worked, but rather for the same reason why analog TV worked this way, which is the fact that it lets you encode significantly less data with barely noticeable quality loss. JPEGs do the same thing.

shiandow 3 hours ago | parent [-]

It's a very crude method; with modern codecs I would be very surprised if you didn't get a better image just encoding the chroma at a lower bitrate.

heckelson 7 hours ago | parent | prev [-]

Isn't it the other way round? We did and still do chroma subsampling _because_ we don't see that much of a difference?

Renaud 18 hours ago | parent | prev | next [-]

Not sure why it would evoke such strong sentiments, but if you don’t like the Bayer filter, know that some true monochrome cameras don’t use it and make every sensor pixel available to the final image.

For instance, the Leica M series have specific monochrome versions with huge resolutions and better monochrome rendering.

You can also modify some cameras and remove the filter, but the results usually need processing. A side effect is that the now exposed sensor is more sensitive to both ends of the spectrum.

NetMageSCW 17 hours ago | parent [-]

Not to mention that there are non-Bayer cameras, ranging from the Sigma Foveon and Quattro sensors, which use stacked sensors to separate color in an entirely different way, to the Fuji EXR and X-Trans sensors.

japanuspus 13 hours ago | parent | prev [-]

If the Bayer pattern makes you angry, I imagine it would really piss you off to realize that the whole concept of encoding an experienced color with a finite number of component colors is fundamentally species-specific and tied to the details of our specific color sensors.

To truly record an appearance without reference to the sensory system of our species, you would need to encode the full electromagnetic spectrum from each point. Even then, you would still need to decide on a cutoff for the spectrum.

...and hope that nobody ever told you about coherence phenomena.

bstsb a day ago | parent | prev [-]

hey, not accusing you of anything (bad assumptions don't lead to a conducive conversation) but did you use AI to write or assist with this comment?

this is totally out of my own self-interest, no problems with its content

sho_hn a day ago | parent | next [-]

Upon inspection, the author's personal website used em dashes in 2023. I hope this helped with your witch hunt.

I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.

brookst a day ago | parent | next [-]

Phew. I have published work with em dashes, bulleted lists, “not just X, but Y” phrasing, and the use of “certainly”, all from the 90’s. Feel sorry for the kids, but I got mine.

qingcharles 15 hours ago | parent [-]

I'm grandfathered in too. RIP the hyphen crew.

mr_toad 21 hours ago | parent | prev | next [-]

> I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.

At least Robespierre needed two sentences before condemning a man. Now the mob is lynching people on the basis of a single glyph.

ozim 15 hours ago | parent | prev | next [-]

I started to use the — dash so that algos skip my writing, thinking it's AI generated.

bstsb 12 hours ago | parent | prev [-]

wasn't talking about the em dashes (i use them myself) but thanks anyway :)

ekidd a day ago | parent | prev | next [-]

I have been overusing em dashes and bulleted lists since the actual 80s, I'm sad to say. I spent much of the 90s manually typing "smart" quotes.

I have actually been deliberately modifying my long-time writing style and use of punctuation to look less like an LLM. I'm not sure how I feel about this.

disillusioned a day ago | parent [-]

Alt + 0151, baby! Or... however you do it on MacOS.

But now, likewise, having to bail on emdashes. My last differentiator is that I always close set the emdash—no spaces on either side, whereas ChatGPT typically opens them (AP Style).

piskov a day ago | parent | next [-]

Just use some typography layout with a separate layer. Eg “right alt” plus “-” for m-dash

Russians use this for at least 15 years

https://ilyabirman.ru/typography-layout/

qingcharles 15 hours ago | parent | prev | next [-]

I'm a savage, I just copy-paste them from Unicode sites.

ksherlock a day ago | parent | prev | next [-]

On the mac you just type — for an em dash or – for an en dash.

xp84 17 hours ago | parent [-]

Is this a troll?

But anyway, it’s option-hyphen for an en-dash and opt-shift-hyphen for the em-dash.

I also just stopped using them a couple years ago when the meme about AI using them picked up steam.

21 hours ago | parent | prev [-]
[deleted]
ajkjk a day ago | parent | prev [-]

found the guy who didn't know about em dashes before this year

also your question implies a bad assumption even if you disclaim it. if you don't want to imply a bad assumption the way to do that is to not say the words, not disclaim them

bstsb 12 hours ago | parent | next [-]

didn't even notice the em dashes to be honest, i noticed the contrast framing in the second paragraph and the "It's impressive how" for its conclusion.

as for the "assumption" bit, yeah fair enough. was just curious of AI usage online, this wasn't meant to be a dig at anyone as i know people use it for translations, cleaning up prose etc

barishnamazov 11 hours ago | parent [-]

No offense taken, but realize that a good number of us folks who learned English as a second language have been taught to write this way (especially in an academic setting). LLMs' writing is like that of people, not the other way around.

reactordev a day ago | parent | prev [-]

The hatred mostly comes from TTS models not properly pausing for them.

“NO EM DASHES” is common system prompt behavior.

xp84 17 hours ago | parent [-]

You know, I didn’t think about that, but you’re right. I have seen so many AI narrations where it reads the dash exactly like a hyphen, actually maybe slightly reducing the inter-word gap. Odd the kinds of “easy” things such a complicated and advanced system gets wrong.