MarkusWandel a day ago

But does applying the same transfer function to each pixel (of a given colour anyway) count as "processing"?

What bothers me as an old-school photographer is this. When you really pushed it with film (e.g. overprocess 400ISO B&W film to 1600 ISO and even then maybe underexpose at the enlargement step) you got nasty grain. But that was uniform "noise" all over the picture. Nowadays, noise reduction is impressive, but at the cost of sometimes changing the picture. For example, the IP cameras I have, sometimes when I come home on the bike, part of the wheel is missing, having been deleted by the algorithm as it struggled with the "grainy" asphalt driveway underneath.

Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look. I'd prefer honest noise, or better yet an adjustable denoising algorithm from "none" (grainy but honest) to what is now the default.

101008 21 hours ago | parent | next [-]

I hear you. Two years ago I went to my dad's and spent the afternoon "scanning" old pictures of my grandparents (his parents), who died almost two decades ago. I took pictures of the physical photos, holding the phone as level as possible (parallel to the picture) so the result was as close to a scan as I could get (avoiding perspective, reflections, etc.).

It was my fault that I didn't check the pictures while I was doing it. Imagine my disappointment when I reviewed them back at home: the Android camera had decided to apply some kind of AI filter to all the pictures. Now my grandparents don't look like themselves at all; they are just an AI version.

krick 6 hours ago | parent [-]

What phone was it? I'm sure there is a lot of ML involved in figuring out how to denoise photos in the dark, etc., but I've never noticed anything I'd want to describe as an "AI filter" on my photos.

Aurornis 21 hours ago | parent | prev | next [-]

> For example, the IP cameras I have, sometimes when I come home on the bike, part of the wheel is missing, having been deleted by the algorithm as it struggled with the "grainy" asphalt driveway underneath.

Heavy denoising is necessary on cheap IP cameras because they pair cheap sensors with high f-number optics. Since you have a photography background, you'll understand the tradeoff you'd have to make if you could only choose one lens and f-stop combination but needed everything in every scene to be in focus.

You can get low-light IP cameras or manual focus cameras that do better.

The second factor is the video compression ratio. The more noise you let through, the higher the bitrate needed to stream and archive the footage. Let too much noise through for a given bitrate setting and the video codec will ditch the noise for you, or you'll be swimming in macroblocks. There are IP cameras that let you turn up the bitrate and turn down the denoise setting like you want, but be prepared to watch your video retention times drop dramatically as most of your bits go to storing that noise.
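The noise-vs-bitrate tradeoff is easy to demonstrate with any lossless compressor standing in for a video codec (zlib here, purely as an illustration; the frame sizes and noise range are made up):

```python
import random
import zlib

random.seed(0)

# A flat grey "frame" compresses extremely well; the same frame with
# sensor-style noise added carries far more entropy, so the compressor
# needs many more bits to represent it.
clean = bytes([128] * 10_000)
noisy = bytes([128 + random.randint(-20, 20) for _ in range(10_000)])

clean_size = len(zlib.compress(clean))
noisy_size = len(zlib.compress(noisy))
assert noisy_size > 10 * clean_size  # the noise dominates the bitstream
```

A real video codec is lossy, so instead of spending those extra bits it will smear the noise away at a fixed bitrate, which is exactly the "denoising for you" effect described above.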

> Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look. I'd prefer honest noise, or better yet an adjustable denoising algorithm from "none" (grainy but honest) to what is now the default.

If you have an iPhone then getting a camera app like Halide and shooting in one of the RAW formats will let you do this and more. You can also choose Apple ProRAW on recent iPhone Pro models which is a little more processed, but still provides a large amount of raw image data to work with.

dahart 21 hours ago | parent | prev | next [-]

> does applying the same transfer function to each pixel (of a given colour anyway) count as “processing”?

This is interesting to think about, at least for us photo nerds. ;) I honestly think there are multiple right answers, but I have a specific one that I prefer. Applying the same transfer function to all pixels corresponds pretty tightly to film & paper exposure in analog photography. So one reasonable followup question is: did we count manually over- or under-exposing an analog photo as manipulation or "processing"? You can't see an image without exposing it, so even though there are timing & brightness recommendations for any given film or paper, generally speaking it's not considered manipulation to expose it until it's visible. Sometimes, if we pushed or pulled to change the way something looks such that you see things that weren't visible to the naked eye, we call it manipulation, but people generally aren't accused of "photoshopping" something just for raising or lowering the brightness a little, right?

When I started reading the article, my first thought was, 'there's no such thing as an unprocessed photo that you can see'. Sensor readings can't be looked at without making choices about how to expose them, without choosing a mapping or transfer function. That's not to mention that they come with physical response curves that the author went out of his way to sort-of remove. The first few dark images in there are an unnatural way to view images, but in fact they are just as processed as the final image; they're simply processed differently. You can't avoid "processing" a digital image if you want to see it. Measuring light with sensors involves response curves, transcoding to an image format involves response curves, and displaying on a monitor or paper involves response curves, so any image has been processed a bunch by the time we see it. Does that count as "processing"? Technically, I think exposure processing is always built in, but that kinda means exposing an image is natural and not some type of manipulation that changes the image. Ultimately it depends on what we mean by "processing".
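A minimal sketch of that "same transfer function applied to every pixel" idea, using NumPy and an assumed gain-plus-gamma curve (all constants invented for illustration):

```python
import numpy as np

def expose(raw, gain=4.0, gamma=2.2):
    """Apply one global transfer function to every pixel:
    a linear gain (exposure) followed by gamma encoding for display."""
    linear = np.clip(raw * gain, 0.0, 1.0)  # brighten uniformly
    return linear ** (1.0 / gamma)          # perceptual encoding

# Two pixels with the same raw value always map to the same output,
# so nothing spatial is altered -- only the tone curve changes.
raw = np.array([[0.01, 0.10],
                [0.10, 0.20]])
out = expose(raw)
assert out[0, 1] == out[1, 0]  # identical inputs -> identical outputs
```

The point of the analogy: like darkroom exposure, this mapping is content-blind. It never looks at a pixel's neighbours, which is what separates it from the denoising and "AI filter" steps discussed elsewhere in the thread.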

henrebotha 15 hours ago | parent [-]

It's like food: Virtually all food is "processed food" because all food requires some kind of process before you can eat it. Perhaps that process is "picking the fruit from the tree", or "peeling". But it's all processed in one way or another.

littlestymaar 14 hours ago | parent [-]

Hence the qualifier in “ultra-processed food”

NetMageSCW 7 hours ago | parent [-]

But that qualifier is stupid, because there's no clear cutoff between ultra-processed foods and all other foods. Is cheese an ultra-processed food? Is wine?

Edman274 3 hours ago | parent | next [-]

There actually is a stopping point, and the line between ultra-processed and merely processed food is often drawn where you could expect someone in a home kitchen to do the processing. So the question becomes whether you would expect someone to be able to make cheese or wine at home. I think you'd find it natural to conclude that there's a difference between a Cheeto, which can only be created in a factory with a secret extrusion process, and cottage cheese, which can be made inside a cottage. You'd probably also note that there's a difference between American cheese, which requires a process that results in a NileRed upload, and cheddar, which can still be made at home over the course of months, much like how people make soap at home. You can tell that wine can be made at home because people make it in jails. I've found that a lot of people on Hacker News tend to flatten distinctions into a binary and then attack the binary as if distinctions don't matter. This is another such example.

littlestymaar 3 hours ago | parent | prev [-]

With that kind of reasoning you can't name anything, ever. For instance, what's a computer? Is a credit card a computer?

jjbinx007 21 hours ago | parent | prev | next [-]

Equally bad is the massive over-sharpening applied to CCTV and dash cams. I tried to buy a dash cam a year ago that didn't over-sharpen its images, but it proved impossible.

Reading reg plates would be a lot easier if I could sharpen the image myself rather than try to battle with the "turn it up to 11" approach by manufacturers.

kccqzy 8 hours ago | parent | prev | next [-]

> Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look.

My theory is that this is trying to do denoising after capturing the image with a high ISO. I personally hate that look.

On my dedicated still camera I almost always set ISO to be very low (ISO 100) and only shoot people when lighting is sufficient. Low light is challenging and I’d prefer not to deal with it when shooting people, unless making everything dark is part of the artistic effect I seek.

On the other hand on my smartphone I just don’t care that much. It’s mostly for capturing memories in situations where bringing a dedicated camera is impossible.

kqr 11 hours ago | parent | prev | next [-]

> But does applying the same transfer function to each pixel (of a given colour anyway) count as "processing"?

In some sense it has to, because you can include a parametric mask in that function, which makes it possible to perform local edits with global functions.
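A sketch of that idea, assuming a hypothetical per-pixel mask blended into an otherwise global function (all names and values invented):

```python
import numpy as np

def global_edit(img, mask, curve):
    """Apply `curve` to every pixel, but blend the result by a
    per-pixel mask: mask = 1 -> fully edited, mask = 0 -> untouched."""
    return mask * curve(img) + (1.0 - mask) * img

img = np.full((4, 4), 0.25)
mask = np.zeros((4, 4))
mask[:2, :] = 1.0                                  # "dodge" the top half only
brighten = lambda x: np.clip(x * 2.0, 0.0, 1.0)

out = global_edit(img, mask, brighten)
assert out[0, 0] == 0.5 and out[3, 3] == 0.25      # a local edit via one global formula
```

So the formula is applied uniformly everywhere, yet the mask parameter smuggles in spatial selectivity, which is why "same function on every pixel" can't by itself draw the line between exposure and manipulation.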

Gibbon1 17 hours ago | parent | prev | next [-]

I was mentioning to my GF (a non-technical animator) the submission "Clock synchronization is a nightmare", and how the problem comes up like a bad penny. She said that in animation you have the problem of animating to match different streams that you have to keep in sync. Bonus: you have to dither, because if you match too closely the players can smell that it's off.

Noise is part of the world itself.

eru 21 hours ago | parent | prev [-]

Just wait a few years, all of this is still getting better.

coldtea 20 hours ago | parent | next [-]

It's not, really. Regarding how much images get processed and artificially altered, it's going in the opposite direction.

trinix912 13 hours ago | parent | prev | next [-]

Except it seems to be going in the opposite direction: every phone I've upgraded to (various Androids and iPhones) has had more smoothing than the one before. My iPhone 16 night photos look borderline generative-AI, and there's no way to turn that off!

I was honestly happier with the technically inferior iPhone 5 camera, the photos at least didn't look fake.

vbezhenar 11 hours ago | parent [-]

If you can get raw image data from the sensor, then there will be apps that produce images without AI processing. Ordinary people love AI enhancements, so the built-in apps are optimised for that, but as long as the underlying data is accessible, there will be third-party apps you can use.
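For illustration, here is a minimal "no-AI" development of hypothetical raw sensor data; the bit depth, black level, and white-balance gains are all made up, and the point is only that nothing here is content-aware, so noise passes straight through:

```python
import numpy as np

def develop(raw, black=64, white=1023, wb=(2.0, 1.0, 1.5), gamma=2.2):
    """Develop (hypothetical) 10-bit raw RGB data with only global steps:
    black-level subtraction, normalisation, white balance, and gamma.
    No denoising, sharpening, or other content-aware processing."""
    x = (raw.astype(np.float64) - black) / (white - black)
    x = np.clip(x * np.array(wb), 0.0, 1.0)  # per-channel white balance
    return x ** (1.0 / gamma)                # gamma-encode for display

# One black pixel and one clipped-white pixel (R, G, B).
raw = np.array([[[64, 64, 64], [1023, 1023, 1023]]])
img = develop(raw)
assert np.all(img[0, 0] == 0.0) and np.all(img[0, 1] == 1.0)
```

Third-party raw-development apps essentially let the user control each of these steps (plus demosaicing and optional denoising strength) instead of baking in a vendor's AI pipeline.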

trinix912 8 hours ago | parent [-]

That's a big IF. There's ProRAW, but for that you need an iPhone Pro; some Androids have RAW too, but the files are huge and lack even the most basic processing, resulting in photos that look like one of the non-final steps in the post.

Third party apps are hit or miss, you pay for one only to figure out it doesn't actually get the raw output on your model and so on.

There's very little excuse for phone manufacturers not to offer a toggle that disables excessive post-processing. iOS even had an HDR toggle, but it has since been removed.

MarkusWandel 21 hours ago | parent | prev [-]

"Better"...

DonHopkins 21 hours ago | parent [-]

"AIer"... Who even needs a lens or CCD any more?

Artist develops a camera that takes AI-generated images based on your location. Paragraphica generates pictures based on the weather, date, and other information.

https://www.standard.co.uk/news/tech/ai-camera-images-paragr...

RestartKernel 18 hours ago | parent [-]

Thanks for the link, that's a very interesting statement piece. There must be some word though for the artistic illiteracy in those X/Twitter replies.