Etheryte 4 days ago

Are they not? Every modern camera does the same thing. Upscaling, denoising, deblurring, adjusting colors, lifting and crushing shadows and highlights: pretty much no aspect of the picture is the way the sensor saw it once the rest of the pipeline is done. Phone cameras do this to a more extreme degree than, say, pro cameras, but they all do it.

PittleyDunkin 4 days ago | parent | next [-]

To point out the obvious, film cameras don't, nor do many digital cameras. Unless you mean modern in the sense of "cameras you can buy from Best Buy right now", of course. But that isn't very interesting: Best Buy has terrible taste in cameras.

sega_sai 4 days ago | parent | next [-]

There are a lot of steps like that, provided you want an image you can show to the user (i.e. a JPEG). You do have to somehow merge the three Bayer-filter color channels onto a rectangular grid, which involves interpolation. You do have to subtract some sort of bias in the detector, and possibly correct for sensitivity variations across it. And you have to map the raw 'electron counts' onto the JPEG scale, which involves another set of decisions/image-processing steps.
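To make the "merge the Bayer detections, which involves interpolation" step concrete, here is a minimal bilinear demosaicing sketch in numpy. It assumes an RGGB mosaic layout (real sensors vary, and real pipelines use far fancier interpolation); missing samples at each pixel are filled with a normalized average of the available neighbors, and measured samples are kept as-is.

```python
import numpy as np

def conv3x3(img, k):
    """Same-size 3x3 convolution with zero padding."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaic of a raw mosaic, assuming RGGB layout.

    raw: 2-D array of sensor counts; returns an HxWx3 RGB estimate.
    Each channel is filled by normalized convolution: a weighted
    average over the pixels where that channel was actually sampled.
    """
    h, w = raw.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1          # R on even rows/cols
    masks[1, 0::2, 1::2] = 1          # G on red rows
    masks[1, 1::2, 0::2] = 1          # G on blue rows
    masks[2, 1::2, 1::2] = 1          # B on odd rows/cols
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        num = conv3x3(raw * masks[c], k)
        den = conv3x3(masks[c], k)
        rgb[..., c] = num / den
        # Keep the actual measured samples untouched.
        m = masks[c].astype(bool)
        rgb[..., c][m] = raw[m]
    return rgb
```

Even this "dumb" baseline is already a stack of decisions (kernel, layout, edge handling) before any creative processing starts.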

PittleyDunkin 4 days ago | parent [-]

There is clear processing in terms of interpreting the raw sensor data as you're describing. Then there are blurrier processes still, like "denoising" and "upscaling", which straddle the line between bias-correction and alteration. Then there's modification of actual color and luminance as the parent was describing. Now we're seeing full alterations applied automatically with neural nets, literally altering shapes and shadows and natural lighting phenomena.

I think it's useful to distinguish all of these even if they are desired. I really love my iPhone camera, but there's something deeply unsettling about how it alters the photos. It's fundamentally producing a different image than you'd get with either film or through your eyes. Naturally this is true for all digital sensors, but we could once point out specifically how and why the resulting image differs from what our eyes see. It's no longer easy to even enumerate the possible alterations that go on via software, let alone control many of them, and I think there will be backlash at some point (or, stated differently, a market for cameras that allow controlling this).

I've got to imagine it's frustrating for people who rely on their phone cameras for daily work to find out that upgrading a phone necessarily means relearning its foibles and adjusting how you shoot to accommodate it. Granted, I mostly take smartphone photos in situations where I'd rather not be neurotic about the result (candids, memories, reminders, etc.), but surely there are professionals out there who can speak to this.

jakeogh 2 days ago | parent [-]

Interesting how there is no option to disable it.

strogonoff a day ago | parent [-]

iPhone’s camera supports raw output, so if you use an appropriate app with an appropriate option you can definitely disable most of the funky stuff the default app does.

However, it is likely that the more you turn off, the more the physical constraints will show. A regular dumb camera with a big sensor provides much more space for deterministic creative processing.

kristjank 4 days ago | parent | prev [-]

Huh, I like your comment. It's such a nice way of pointing out someone equating marketability to quality.

cubefox 4 days ago | parent | prev | next [-]

"AI-powered image post-processing" is only done in smartphones, I believe.

CharlesW 4 days ago | parent [-]

Not anymore. DSLR makers are already using AI (in-camera neural network processing) for things like upscaling and noise removal. https://www.digitalcameraworld.com/reviews/canon-eos-r1-revi...

"The Neural network Image Processing features in this camera are arguably even more important here than they are in the R5 Mark II. A combination of deep learning and algorithmic AI is used to power In-Camera Upscaling, which transforms the pedestrian-resolution 24.2MP images into pixel-packed 96MP photos – immediately outclassing every full-frame camera on the market, and effectively hitting GFX and Hasselblad territory.

"On top of that is High ISO Noise Reduction, which uses AI to denoise images by 2 stops. It works wonders when you're pushing those higher ISOs, which are already way cleaner than you'd expect thanks to the flagship image sensor and modest pixel count."

cubefox 3 days ago | parent [-]

I assume they will soon also do AI-powered color, contrast, and general lighting adjustments, like smartphones do.

stevenae 4 days ago | parent | prev [-]

Pro cameras do not do this to any degree.

Edit: by default.

vlabakje90 4 days ago | parent | next [-]

The cameras themselves might not, but to get a decent picture you will need to apply at least demosaicing and gamma correction in software, even with high-end cameras.
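Gamma correction is a good example of a processing step that's unavoidable but fully deterministic. Sensor counts are (roughly) linear in light, while displays expect gamma-encoded values; a sketch of the standard sRGB transfer function, as defined in IEC 61966-2-1:

```python
import numpy as np

def srgb_encode(linear):
    """Apply the sRGB opto-electronic transfer function.

    linear: linear-light values in [0, 1]. Below a small threshold
    the curve is a straight line; above it, a ~1/2.4 power law.
    """
    linear = np.clip(np.asarray(linear, dtype=float), 0.0, 1.0)
    return np.where(
        linear <= 0.0031308,
        12.92 * linear,                              # linear toe
        1.055 * np.power(linear, 1 / 2.4) - 0.055,   # gamma segment
    )
```

Same input, same output, every time; the question upthread is whether learned reconstruction steps belong in the same category.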

gyomu 4 days ago | parent [-]

Right, and the point people are making upthread is that deterministic signal processing and probabilistic reconstruction approaches are apples and oranges.

oasisaimlessly 4 days ago | parent [-]

It's trivial to make most AI implementations deterministic; just use a constant RNG seed.
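For what it's worth, that's easy to demonstrate with a toy stochastic step (the function name and noise scale here are illustrative, not from any real pipeline). Fixing the seed makes the output bit-identical across runs, though in practice you'd also need deterministic kernels, since some GPU operations are nondeterministic regardless of seed:

```python
import numpy as np

def noisy_inference(x, rng):
    """Stand-in for a stochastic model step (e.g. sampling or
    inference-time dropout): the output depends on the RNG state."""
    return x + rng.normal(scale=0.01, size=x.shape)

x = np.linspace(0.0, 1.0, 8)
a = noisy_inference(x, np.random.default_rng(seed=42))
b = noisy_inference(x, np.random.default_rng(seed=42))
assert np.array_equal(a, b)  # same seed: bit-identical results
```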

klabb3 3 days ago | parent [-]

Deterministic within a single image yes, but not within arbitrary subsections. Classical filters aren’t trying to reconstruct something resembling “other images it’s seen before”. Makes a difference both in theory and practice.
