astrange 3 days ago:
> the sensor readout for each rgb pixel

Camera pixels are only one color at a time: GGRR / BBGG (quad-Bayer; Fujifilm uses a weirder layout called X-Trans). And some of them will be missing entirely because they're damaged or serve as focus pixels. Then you still have to do white balance and tone mapping, because your eyes do that and the camera sensor doesn't.
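To make the "one color per pixel" point concrete, here is a hedged sketch (all names illustrative, not any vendor's actual pipeline) of sampling a full-color image through a quad-Bayer mosaic, where each 2x2 block of sensor pixels shares one color filter:

```python
import numpy as np

# Channel index per pixel in one 4x4 quad-Bayer tile: 0=R, 1=G, 2=B.
# Rows read GGRR / GGRR / BBGG / BBGG, matching the pattern in the comment.
QUAD_BAYER = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [2, 2, 1, 1],
    [2, 2, 1, 1],
])

def mosaic(rgb):
    """Reduce an HxWx3 image to the single-value-per-pixel readout a sensor sees."""
    h, w, _ = rgb.shape
    cfa = np.tile(QUAD_BAYER, (h // 4 + 1, w // 4 + 1))[:h, :w]
    rows, cols = np.indices((h, w))
    return rgb[rows, cols, cfa]  # keep only the filtered channel at each site

img = np.random.rand(8, 8, 3)
raw = mosaic(img)
print(raw.shape)  # (8, 8) — one value per pixel, not three
```

Demosaicing (interpolation) is the step that reconstructs the missing two channels at every pixel from this single-channel readout.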
kjkjadksj 3 days ago (parent):
There is a big difference between interpolation (handling the Bayer or X-Trans array and delivering a three-layer image file in your choice of format, bit depth, and algorithm), shooting with a color card and a calibrated monitor for white balance and tone mapping if you care about that level of accuracy, and what Apple is doing: black-box ML that subtly yassifies your images and garbles small printed text. Especially when the commenter's use case is building out the family archive, not posting selfies on Instagram.
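The deterministic side of this argument can be sketched in a few lines. Classic color-card white balance is just a fixed per-channel gain computed from a neutral patch, with no learned or content-aware behavior (function and variable names here are illustrative, not from any real raw converter):

```python
import numpy as np

def white_balance(img, patch):
    """Scale each channel so a sampled neutral patch becomes gray.

    img:   HxWx3 float array in linear light
    patch: (row_slice, col_slice) covering the gray card in the frame
    """
    means = img[patch].reshape(-1, 3).mean(axis=0)  # per-channel card reading
    gains = means.mean() / means                    # neutralize the color cast
    return np.clip(img * gains, 0.0, 1.0)

# A flat image with a warm cast; for the demo the whole frame is the "card".
warm = np.full((4, 4, 3), [0.6, 0.5, 0.4])
balanced = white_balance(warm, (slice(None), slice(None)))
print(balanced[0, 0])  # all three channels now equal (0.5)
```

The same correction applies identically to every pixel and is fully reproducible, which is the contrast being drawn with an opaque ML pipeline that rewrites local image content.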