cameronh90 5 days ago
Digital has never been light-to-pixel. At a minimum, you have demosaicing, dark frame subtraction, and some form of tone mapping just to produce anything you'd recognise as a photo. Producing a half-way acceptable image then involves denoising, sharpening, dewarping, and chromatic aberration correction, and that only gets us up to what was normal at the turn of the millennium. Nowadays, without automatic bracketing and stacking, digital image stabilisation, rolling shutter reduction, and much more, you're going to have pretty disappointing phone pics.

I suspect you're trying to draw a distinction between the older, predictable techniques for turning sensor data into an image and the modern impenetrable ones that can hallucinate. I know what you're getting at, but there's no clear point where one becomes the other. You can consider demosaicing and "super-res zoom" as two points on the same spectrum of super-resolution techniques, both intended to convert large amounts of raw sensor data into an image that's closer to the ground truth. I've even seen some pretty crazy artefacts introduced by an old-fashioned Lanczos-resampling-based demosaicing filter. Albeit not Ryan Gosling [0].

Of course, if you don't like any of this, you can configure phones to produce RAW output, or even pick up a mirrorless, and take full control of the processing pipeline. I've been out of the photography world for a while so I'm probably out of date now, but I don't think DNGs can even store all of the raw data that Apple and Google now use in their image processing pipelines. Certainly, I never had much luck turning those RAW files into anything that looked good. Apple have ProRAW, which I think is some sort of hybrid format, but I don't really understand it.

[0] https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-a...
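To make that "at a minimum" pipeline concrete, here's a toy sketch in Python/NumPy (my own illustration, not any vendor's actual code; the RGGB layout, the `develop` function, and the bare gamma curve are all simplifying assumptions): dark frame subtraction, bilinear demosaicing, and a tone map.

    import numpy as np
    from scipy.signal import convolve2d

    def develop(raw, dark, gamma=2.2):
        """Toy raw-to-RGB pipeline. `raw` and `dark` are HxW linear
        sensor readings in [0, 1] with an assumed RGGB Bayer layout."""
        # 1. Dark frame subtraction: remove fixed-pattern sensor noise.
        signal = np.clip(raw - dark, 0.0, 1.0)

        # Which Bayer site holds which colour.
        h, w = signal.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
        g_mask = 1.0 - r_mask - b_mask

        # 2. Bilinear demosaicing: every missing colour value is a
        # weighted average of the nearest real samples of that colour
        # (normalised convolution, so image edges work too).
        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.5,  1.0, 0.5],
                           [0.25, 0.5, 0.25]])
        planes = []
        for mask in (r_mask, g_mask, b_mask):
            num = convolve2d(signal * mask, kernel, mode="same")
            den = convolve2d(mask, kernel, mode="same")
            planes.append(num / den)
        rgb = np.dstack(planes)

        # 3. Tone mapping: a bare gamma curve standing in for the much
        # fancier local tone mapping phones actually do.
        return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)

Even this deliberately dumb version is already inventing two of the three colour values at every pixel from its neighbours; everything fancier, classical or neural, just argues about how speculative that filling-in is allowed to get.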
hatthew 5 days ago
By my understanding, demosaicing almost always just "blurs" the photo slightly, reducing high-frequency information. Tone mapping is unavoidable, invisible to most people, and usually doesn't change the semantic information within an image (the famous counterexample is, of course, The Dress). Phone cameras in recent years do additional processing to saturate, sharpen, apply HDR, etc., and I find that distasteful and will happily argue against it.

But AI upscaling/enhancement is a step further, and to me it feels like a very big step further. It's the first time an automatic processing step has carried a high risk of introducing new (and often incorrect) semantic information that is not present in the original image, the classic example being the Samsung moon.
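To put a point on why that step feels categorically different (my own toy illustration, with made-up sample values): classical interpolation, demosaicing included, computes a convex combination of nearby measured samples, so its output is mathematically trapped between the darkest and brightest thing the sensor actually saw in that neighbourhood. A learned enhancer has no such bound.

    import numpy as np

    # Hypothetical real samples surrounding a missing pixel value.
    neighbours = np.array([0.20, 0.35, 0.30, 0.25])
    # Bilinear weights: non-negative and summing to 1, i.e. a convex combination.
    weights = np.array([0.25, 0.25, 0.25, 0.25])

    interpolated = neighbours @ weights
    # The result can blur detail, but it can never leave the range of the
    # measured samples, so it cannot introduce new semantic content.
    assert neighbours.min() <= interpolated <= neighbours.max()

That bound is exactly what you give up with a model that's free to draw a sharper moon than the one in front of the lens.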
ge96 5 days ago
That demo they show is just crazy: imagine the vehicle is actually a truck, but you zoom in and it becomes a Porsche...

Conspiracy tangent: try to take a picture of something you're not supposed to, and your phone won't let you, ha. Money could be an example, and there I get the reason (that's printers, but the same idea).
atomicthumbs 5 days ago
The car looks mutated and slimy. Most stuff that has used computational photography before now didn't imagine things from whole cloth.