userbinator a day ago

I think everyone agrees that dynamic range compression and de-Bayering (for colour-filtered sensors) are necessary for digital photography, but at the other end of the spectrum is "use AI to recognise objects and hallucinate what they 'should' look like" --- and although most people would probably say that isn't a real photo anymore, manufacturers seem to be pushing strongly in that direction, raising issues like the admissibility of photographic evidence.

stavros a day ago | parent | next [-]

One thing I've learned while dabbling in photography is that there are no "fake" images, because there are no "real" images. Everything is an interpretation of the data that the camera has to do, making a thousand choices along the way, as this post beautifully demonstrates.

A better discriminator might be global edits vs local edits, with local edits being things like retouching specific parts of the image to make desired changes. One could argue that local edits are "more fake" than global edits, but it still depends on a thousand factors, most importantly intent.

"Fake" images are images with intent to deceive. By that definition, even an image that came straight out of the camera can be "fake" if it's showing something other than what it's purported to (e.g. a real photo of police violence but with a label saying it's in a different country is a fake photo).

What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied. We should get over that specific editing process, it's no more fake than anything else.

mmooss a day ago | parent | next [-]

> What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied.

Filters themselves don't make it fake, just like words themselves don't make something a lie. How the filters and words are used, whether they bring us closer or further from some truth, is what makes the difference.

Photos implicitly convey, usually, 'this is what you would see if you were there'. Obviously filters can help with that, as in the OP, or hurt.

bborud 8 hours ago | parent | prev | next [-]

I agree with global vs local edits as a discriminator, but there is a bit of a sliding scale here. For instance, when you edit a photo of something lit by multiple light sources with different color temperatures, the photo will show more dramatic color differences than you were aware of when looking at the same scene. So when editing a photo, you may apply some processing to different areas to nudge the colors closer to how you'd "see" them.

Ditto for black and white photos. Your visual perception has pretty high dynamic range, not least because your eyes move and your brain creates a representation that gives you the illusion of higher dynamic range than your eyes can actually deliver. So when you want to represent a scene using a technology that can only give you a fraction of the dynamic range you (or your camera) can see, you sometimes make local-ish edits (e.g. create a mask with a brush or gradients to lighten or darken regions).

Ansel Adams did a lot of dodging and burning in his prints. Some of the more famous ones are very obvious in terms of having been “processed” during the exposure of the print.

I see this as overcoming the limitations in conveying what your eyes/brain will see when using the limited capabilities of camera/screen/print. These are local-ish edits, but the intent isn't so much to deceive as it is to nudge information into a range where it can be seen and understood.

xgulfie a day ago | parent | prev | next [-]

There's an obvious difference between debayering and white balance vs using Photoshop's generative fill

sho_hn a day ago | parent [-]

Pretending that "these two things are the same, actually" when in fact you can separately name and describe them quite clearly is a favorite pastime of vacuous content on the internet.

Artists, who use these tools with clear vision and intent to achieve specific goals, strangely never have this problem.

tpmoney 16 hours ago | parent [-]

But that was the point the OP was making. Not that you couldn’t differentiate between white balance correction and generative fill, but rather that the intent of the change matters for determining if an image is “fake”.

For example, I took a picture of my dog at the dog park the other day. I didn't notice it when framing the picture, but on review at home, right smack in the middle of the lower third of the photo, conveniently positioned so your eyes are led there by my dog's pose and snout direction, was a giant, old, crusty turd. Once you noticed it, it was very hard to stop seeing it. So I broke out the photo editing tools and used an auto-retouching tool to remove the turd. Luckily for me, since the ground was mulch, the tool did a fantastic job of blending it out, and if I didn't tell you it had been retouched, you wouldn't know.

Is that a fake image? The subject of the photo was my dog. The purpose of the photo was to capture my dog doing something entertaining. When I was watching the scene with my own human eyes I didn't see the turd, nor was capturing it intended or essential to capturing what I wanted to capture. But I did use some generative tool (algorithmic or AI, I couldn't say) to convincingly replace the turd with more mulch.

So does doing that make the image fake? I would argue no. If you ask me what the photo is, I say it's a photo of my dog. The edit does not change my dog, nor change the surroundings to make the dog appear somewhere else or to be doing something they weren't doing, had you been there to witness it yourself. I don't intend the photo to be used as a demonstration of how clean that particular dog park is or was on that day, or even as a photo representing that dog park at all. My dog just happened to be in that locale when they did something I wanted a picture of. So to me that picture is no more fake than any other picture in my library.

But a pure "differentiate on the tools" analysis says it is a fake image: content that wasn't captured by the sensor is now in the image, and content that was captured no longer is. Fake image then, right?

I think the OP has it right: the intent behind your use of the tool (and its effect) matters more than which specific tool you used.

bondarchuk 2 hours ago | parent | next [-]

Everyone knows what is meant by a real vs fake digital photo; it is made abundantly clear by the mentions of debayering and white balance/contrast as "real" and generative fill as "fake". You and some others here are just shifting the conversation to a different kind of "fake". A whole load of semantic bickering for absolutely nothing.

card_zero 14 hours ago | parent | prev [-]

I don't know, removing the turd from that picture reminds me of when Stalin had the head of the NKVD (deceased) removed from photos after the purge. It sounds like the turd was probably the focus of all your dog's attention and interest at the time, and editing it out has created a misleading situation in a way that would be outrageous if I was a dog and capable of outrage.

teeray a day ago | parent | prev | next [-]

> "Fake" images are images with intent to deceive

The ones that make the annual rounds up here in New England are those foliage photos with saturation jacked. “Look at how amazing it was!” They’re easy to spot since doing that usually wildly blows out the blues in the photo unless you know enough to selectively pull those back.

mr_toad 20 hours ago | parent | next [-]

Often I find photos rather dull compared to what I recall. Unless the lighting is perfect it’s easy to end up with a poor image. On the other hand the images used in travel websites are laughably over processed.

dheera a day ago | parent | prev [-]

Photography is also an art. When painters jack up saturation in their choices of paint colors, people don't bat an eyelid. There's no good reason photographers can't take that liberty as well, and tone mapping choices are in fact a big part of photographers' expressive medium.

If you want reality, go there in person and stop looking at photos. Viewing imagery is a fundamentally different type of experience.

zmgsabst a day ago | parent [-]

Sure — but people reasonably distinguish between photos and digital art, with “photo” used to denote the intent to accurately convey rather than artistic expression.

We’ve had similar debates about art using miniatures and lens distortions versus photos since photography was invented — and digital editing fell on the lens trick and miniature side of the issue.

dheera a day ago | parent [-]

Journalistic/event photography is about accuracy to reality; almost all other types of photography are not.

Portrait photography -- no, people don't look like that in real life with skin flaws edited out

Landscape photography -- no, the landscapes don't look like that 99% of the time, the photographer picks the 1% of the time when it looks surreal

Staged photography -- no, it didn't really happen

Street photography -- a lot of it is staged spontaneously

Product photography -- no, they don't look like that in normal lighting

switchbak 20 hours ago | parent | next [-]

This is a longstanding debate in landscape photography communities. Virtually everyone edits, but there's real disagreement about where the line is and what is too much. There does seem to be an idea of being faithful to the original experience, which I subscribe to, but that's certainly not universal.

NetMageSCW 3 hours ago | parent | prev | next [-]

Nothing can be staged spontaneously.

BenjiWiebe 21 hours ago | parent | prev [-]

Re landscape photography: If it actually looked like that in person 1 percent of the time, I'd argue it's still accurate to reality.

dheera 19 hours ago | parent [-]

There are a whole lot of landscape photographs out there whose realism I can vouch for, because I do a lot of landscape photography myself and tend to get out at dawn and dusk a lot. There are lots of shots I got where the sky looked a certain way for a grand total of 2 minutes before sunrise, and I can recognize similar lighting in other people's shots as real.

A lot of armchair critics on the internet, who only go out to their local park at high noon, will say they look fake, but they're not.

There are other elements where I can spot realism but the armchair critic will call it a "bad photoshop". For example, a moon close to the horizon usually looks jagged and squashed due to atmospheric effects. That's the sign of a real moon. If it looks perfectly round and white at the horizon, I would call it a fake.

userbinator a day ago | parent | prev | next [-]

> Everything is an interpretation of the data that the camera has to do

What about this? https://news.ycombinator.com/item?id=35107601

mrandish a day ago | parent | next [-]

News agencies like AP have already come up with standards and guidelines that technically define 'acceptable' types and degrees of image processing for professional photojournalism.

You can look it up because it's published on the web, but IIRC it's generally what you'd expect. It's okay to do whole-image processing where all pixels have the same algorithm applied, like the basic brightness, contrast, color, tint, gamma, levels, cropping and scaling filters that have been standard for decades. The usual debayering and color space conversions are also fine. Selectively removing, adding or changing only some pixels or objects is generally not okay for journalistic purposes. Obviously, per-object AI enhancement of the type many mobile phones and social media apps apply by default doesn't meet such standards.
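
To illustrate what "whole-image processing" means, here is a toy sketch (the function and parameter names are mine, not anything from the AP guidelines): a global exposure/levels/gamma edit is one formula applied identically to every pixel, in contrast to selective per-region edits.

    import numpy as np

    def global_adjust(img, exposure=1.0, black=0.0, white=1.0, gamma=1.0):
        # "Whole-image" processing: the same formula is applied to every
        # pixel, with no per-region or per-object decisions.
        out = np.clip((img * exposure - black) / max(white - black, 1e-6), 0.0, 1.0)
        return out ** (1.0 / gamma)

    img = np.random.default_rng(0).random((4, 4, 3))       # toy linear RGB in [0, 1]
    print(global_adjust(img, exposure=1.2, gamma=2.2)[0, 0])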

mgraczyk a day ago | parent | prev | next [-]

I think Samsung was doing what was alleged, but speaking as somebody who was working on state-of-the-art camera processing algorithms at a competitor while this was happening, this experiment does not prove it. Gaussian blurring does not remove the information; you can deconvolve it, and it's possible that Samsung's pre-ML super-resolution was essentially inverting a Gaussian convolution.
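
Roughly, "inverting a Gaussian convolution" means something like the following toy numpy sketch (the test pattern, sigma and regularization constant are made up for illustration, and this is not Samsung's pipeline):

    import numpy as np

    def gaussian_psf(shape, sigma):
        # Centered 2D Gaussian point-spread function, normalized to sum to 1.
        y, x = np.indices(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    def blur(img, psf):
        # Circular convolution via the FFT (periodic borders, to keep it short).
        return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

    def deblur(blurred, psf, k=1e-6):
        # Regularized (Wiener-style) inverse filter: H* / (|H|^2 + k).
        # With noise-free input and small k this undoes most of the blur;
        # with real sensor noise, k has to grow and fine detail stays lost.
        H = np.fft.fft2(np.fft.ifftshift(psf))
        return np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + k)))

    img = np.kron(np.indices((16, 16)).sum(axis=0) % 2, np.ones((8, 8))).astype(float)  # checkerboard
    psf = gaussian_psf(img.shape, sigma=1.5)
    blurred = blur(img, psf)
    restored = deblur(blurred, psf)
    print(np.abs(blurred - img).mean(), np.abs(restored - img).mean())  # restored is far closer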

userbinator a day ago | parent [-]

If you read the original source article, you'll find this important line:

> I downsized it to 170x170 pixels

mgraczyk a day ago | parent [-]

And? What algorithm was used for downsampling? What was the high-frequency content of the downsampled image after doing a pseudo-inverse with upsampling? How closely does it match the Samsung output?

My point is that there IS an experiment which would show that Samsung is doing some nonstandard processing likely involving replacement. The evidence provided is insufficient to show that.

Dylan16807 21 hours ago | parent | next [-]

You can upscale a 170x170 image yourself, if you're not familiar with what that looks like. The only high frequency details you have after upscaling are artifacts. This thing pulled real details out of nowhere.
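
A toy numpy illustration of why (sizes and kernels are made up, and real resamplers use better kernels, but the information argument is the same): detail finer than the 170x170 grid simply isn't in the data any more, so anything "recovered" has to be guessed.

    import numpy as np

    def box_downsample(img, f):
        # Average f x f blocks; whatever varied inside a block is gone for good.
        h, w = img.shape
        return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

    def nearest_upsample(small, f):
        # Repeat each pixel f x f times; nothing here can invent new detail.
        return np.repeat(np.repeat(small, f, axis=0), f, axis=1)

    img = np.random.default_rng(0).random((680, 680))        # stand-in for a detailed source
    roundtrip = nearest_upsample(box_downsample(img, 4), 4)   # 680 -> 170 -> 680

    # Detail finer than the 170x170 grid survives in the original but not the round trip.
    fine = lambda x: x - nearest_upsample(box_downsample(x, 4), 4)
    print(np.abs(fine(img)).mean(), np.abs(fine(roundtrip)).mean())   # non-zero vs ~0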

mgraczyk 21 hours ago | parent [-]

That is not true

For example see

https://en.wikipedia.org/wiki/Edge_enhancement

Dylan16807 21 hours ago | parent [-]

That example isn't doing any scaling.

You can try to guess the location of edges to enhance them after upscaling, but it's guessing, and when the source has the detail level of a 170x170 moon photo a big proportion of the guessing will inevitably be wrong.

And in this case it would take a pretty amazing unblur to even get to the point it can start looking for those edges.

mgraczyk 21 hours ago | parent [-]

You're mistaken, and the original experiment does not distinguish between classic edge-aware upscaling/super-resolution and more problematic replacement.

Dylan16807 21 hours ago | parent [-]

I'm mistaken about which part? Let's start here:

You did not link an example of upscaling, the before and after are the same size.

Unsharp filters enhance false edges on almost all images.

If you claim either one of those is wrong, you're being ridiculous.

mgraczyk 17 hours ago | parent [-]

I think if you paste our conversation into ChatGPT it can explain the relevant upsampling algorithms. There are algorithms that will artificially enhance edges in a way that can look like "AI", for example everything done on Pixel phones prior to ~2023.

And to be clear, everyone including Apple has been doing this since at least 2017.

The problem with what Samsung was doing is that it was moon-specific detection and replacement

userbinator 20 hours ago | parent | prev [-]

You have clearly made no attempt to read the original article, which has a lot more evidence (or are actively avoiding it), and somehow seem to be defending Samsung vociferously but emptily, so you're not worth arguing with and I'll just leave this here:

> I zoomed in on the monitor showing that image and, guess what, again you see slapped on detail, even in the parts I explicitly clipped (made completely 100% white):

mgraczyk 17 hours ago | parent [-]

> somehow seem to be defending Samsung vociferously but emptily

The first words I said were that Samsung probably did this

And you're right that I didn't read the dozens of edits which were added after the original post. I was basing my arguments off everything before the "conclusion section", which it seems the author understands was not actually conclusive.

I agree that the later experiments, particularly the "two moons" experiment, were decisive.

Also, to be clear, I know that Samsung was doing this, because as I said I worked at a competitor. At the time I did my own tests on Samsung devices because I was also working on moon-related image quality.

the_af 7 hours ago | parent | prev [-]

Wow.

From one of the comments there:

> When people take a picture on the moon, they want a cool looking picture of the moon, and every time I have take a picture of the moon, on what is a couple of year old phone which had the best camera set up at the time, it looks awful, because the dynamic range and zoom level required is just not at all what smart phones are good at.

> Hence they solved the problem and gave you your picture of the moon. Which is what you wanted, not a scientifically accurate representation of the light being hit by the camera sensor. We had that, it is called 2010.

Where does one draw the line though? This is a kind of lying, regardless of the whole discussion about filters and photos always being an interpretation of raw sensor data and whatnot.

Again, where does one draw the line? The person taking a snapshot of the moon expects a correlation between the data captured by the sensor and whatever they end up showing their friends. What if the camera only acknowledged "ok, this user is trying to photograph the moon" and replaced ALL of the sensor data with a library image of the moon it has stored in its memory? Would this be authentic or fake? It's certainly A photo of the moon, just not a photo taken with the current camera. But the user believes it's taken with their camera.

I think this is lying.

to11mtm a day ago | parent | prev | next [-]

Well, that's why back in the day (and even still) you sometimes see photographers listing their whole kit for every shot.

i.e. camera + lens + ISO + shutter speed + f-stop + focal length + teleconverter (if present) + filter (if present). Add focus distance if being super duper proper.

And some of that is to help provide at least the right requirements to try to recreate the shot.

rozab 10 hours ago | parent | prev | next [-]

Another low-tech example: those telephoto crowd shots that were popular during COVID. The "deception" happens before the light hits the sensor, but it's no less effective.

https://www.theguardian.com/australia-news/2020/sep/13/pictu...

bandrami 16 hours ago | parent | prev | next [-]

A boss once asked me "is there a way to tell if an image has been Photoshopped?" and I did eventually get him to "yes, if you can see the image it has been digitally processed and altered by that processing". (The brand-name-as-generic conversation was saved for another day.)

badc0ffee 5 hours ago | parent [-]

> (The brand-name-as-generic conversation was saved for another day.)

Maybe don't bring that up, unless you want your boss to think you're a tedious blowhard.

mcdeltat 21 hours ago | parent | prev | next [-]

Eh, I'm a photographer and I don't fully agree. Of course almost all photos these days are edited in some form. Intent is important, yes. But there are still some kinds of edits that immediately classify a photo as "fake" for me.

For example, if you add snow to a shot with masking or generative AI, it's fake, because it wasn't actually snowing in the real-life experience. You can't just hallucinate a major part of the image - that counts as fake to me. A major departure from the reality of the scene. Many other types of edits don't have this property because they are mostly based on the reality of what occurred.

I think for me this comes from an intrinsic valuing of the act/craft of photography, in the physical sense. Once an image is too digitally manipulated then it's less photography and more digital art.

nospice a day ago | parent | prev | next [-]

> A better discriminator might be global edits vs local edits,

Even that isn't all that clear-cut. Is noise removal a local edit? It only touches some pixels, but obviously, that's a silly take.

Is automated dust removal still global? The same idea, just a bit more selective. If we let it slide, what about automated skin blemish removal? Depth map + relighting, de-hazing, or fake bokeh? I think that modern image processing techniques really blur the distinction here because many edits that would previously need to be done selectively by hand are now a "global" filter that's a single keypress away.

Intent is the defining factor, as you note, but intent is... often hazy. If you dial down the exposure to make the photo more dramatic / more sinister, you're manipulating emotions too. Yet, that kind of editing is perfectly OK in photojournalism. Adding or removing elements for dramatic effect? Not so much.

card_zero a day ago | parent [-]

What's this, special pleading for doctored photos?

The only process in the article that involves nearby pixels is combining R, G, and B (and the other G) into one screen pixel. (In principle these could be mapped to subpixels.) Everything fancier than that can reasonably be called some fake cosmetic bullshit.
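
For what it's worth, that "combine R, G and B into one pixel" step is roughly 2x2 binning of the Bayer mosaic. A toy numpy sketch, assuming an RGGB layout (not the article's actual code):

    import numpy as np

    def bin_rggb(raw):
        # Collapse each 2x2 RGGB cell (R G / G B) into a single RGB pixel,
        # averaging the two green samples. No neighbouring cells are consulted.
        r  = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]
        g2 = raw[1::2, 0::2]
        b  = raw[1::2, 1::2]
        return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

    raw = np.random.default_rng(0).random((8, 8))   # toy "raw" mosaic
    print(bin_rggb(raw).shape)                      # (4, 4, 3): half resolution, full color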

seba_dos1 a day ago | parent | next [-]

The article doesn't even go anywhere near what you need to do in order to get an acceptable output. It only shows the absolute basics. If you apply only those to a photo from a phone camera, it will be massively distorted (the effect is smaller, but still present on big cameras).

cellular 8 hours ago | parent | next [-]

When I worked on an image pipeline, the images were circular and had to be warped to a square. Also, the edges of the circular image were darker than the middle and needed to be brightened.

card_zero a day ago | parent | prev [-]

"Distorted" makes me think of a fisheye effect or something similar. Unsure if that's what you meant.

seba_dos1 a day ago | parent [-]

That's just one kind of distortion you'll see. There will also be bad pixels, lens shading, excessive noise in low light, various electrical differences across rows and temperatures that need to be compensated for... Some (most?) sensors will even correct some of these for you before handing you "raw" data.

Raw formats usually carry "Bayer-filtered linear (well, almost linear) light in device-specific color space", not necessarily "raw unprocessed readings from the sensor array", although some vendors move it slightly more towards the latter than others.
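
A toy sketch of two of those corrections, lens shading (flat-field division) and bad-pixel interpolation; the calibration inputs here are invented, and a real ISP works on same-color Bayer neighbors and per-channel gain maps:

    import numpy as np

    def correct_raw(raw, flat_field, bad_mask):
        # Lens-shading correction: divide by the normalized flat-field frame,
        # boosting the darker corners back toward the centre brightness.
        out = raw / (flat_field / flat_field.max())
        # Bad-pixel correction: replace flagged pixels with the mean of their
        # left/right neighbours (crude, but shows the idea).
        for y, x in zip(*np.nonzero(bad_mask)):
            xl, xr = max(x - 1, 0), min(x + 1, out.shape[1] - 1)
            out[y, x] = (out[y, xl] + out[y, xr]) / 2.0
        return out

    rng = np.random.default_rng(0)
    raw = rng.random((16, 16))
    y, x = np.indices((16, 16))
    flat = 1.0 - 0.3 * (((y - 7.5) ** 2 + (x - 7.5) ** 2) / (2 * 7.5 ** 2))  # darker corners
    bad = np.zeros((16, 16), dtype=bool); bad[5, 7] = True
    print(correct_raw(raw, flat, bad).shape)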

Toutouxc 14 hours ago | parent | prev | next [-]

In that case you can't reasonably do digital photography without "fake cosmetic bullshit" and no current digital camera will output anything even remotely close to no fake cosmetic bullshit.

card_zero 14 hours ago | parent [-]

That sounds likely. I wonder what specific filters can't be turned off, though. I think you can usually turn off sharpening. Maybe noise removal is built-in somehow (I think somebody else said it's in the sensor).

Toutouxc 13 hours ago | parent [-]

I think you’ll find that there is no clear line between what you call fake bullshit and the rest of the process. The entire signal path is optimized at every step to reduce and suppress noise. There’s actual light noise, there’s readout noise, ADC noise, often dozens or hundreds of abnormal pixels. Certain autofocus technologies even sacrifice image-producing pixels, and simply interpolate over the “holes” in data.

Regarding sharpening and optical stuff, many modern camera lenses are built with the expectation that some of their optical properties will be easy to correct for in software, allowing the manufacturer to optimize for other properties.

nospice a day ago | parent | prev [-]

I honestly don't understand what you're saying here.

card_zero a day ago | parent [-]

I can't see how to rephrase it. How about this:

Removing dust and blemishes entails looking at more than one pixel at a time.

Nothing in the basic processing described in the article does that.

melagonster a day ago | parent | prev | next [-]

Today, I suspect the other common meaning of "fake image" is an image that was generated by AI.

kortilla a day ago | parent | prev | next [-]

But when you shift the goalposts that far, no real image has ever been produced. Yet people very clearly want a way to describe when an image has been modified to represent something that didn't happen.

imiric a day ago | parent | prev | next [-]

I understand what you and the article are saying, but what GP is getting at, and what I agree with, is that there is a difference between a photo that attempts to reproduce what the "average" human sees, and digital processing that augments the image in ways that no human could possibly visualize. Sometimes we create "fake" images to improve clarity, detail, etc., but that's still less "fake" than smoothing skin to remove blemishes, or removing background objects. One is clearly a closer approximation of how we perceive reality than the other.

So there are levels of image processing, and it would be wrong to dump them all in the same category.

the_af 7 hours ago | parent | prev [-]

Sidestepping the whole discussion about "fake" and "real" images, I think what matters is what degree of correlation there is, if any, between the raw sensor data and the final photo you show your friends.

Raw data requires interpretation, no argument there.

But when AI starts making stuff up out of nowhere, it becomes a problem. Again, some degree of making up stuff is ok, but AI often crosses the line. When it diverges enough from what was captured by the sensor, it crosses firmly into "made up" territory.

grishka 11 hours ago | parent | prev | next [-]

For me personally, it's fine to do things like local tone mapping, but object segmentation is where I draw the line. As in, a camera shouldn't know what a sky or a tree or a person is. It shouldn't care about any of that. It shouldn't process different parts of the image differently depending on what it "thinks" is there. Also, denoising should be configurable, because I would always prefer noise over this stupid "painted" look.

liampulles 14 hours ago | parent | prev [-]

ML demosaicing algorithms (e.g. convolutional neural networks) are the state of the art for reconstructing full-color images from the camera's color filter array, and this was true back when I did my post-grad studies on the subject almost 10 years ago, not to mention all the other stages of the post-processing stack. So one will have to wrestle with the fact that some form of "AI" has been part of digital images for a while now.
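
As a sketch of the idea (an untrained toy model in PyTorch, not any particular published architecture): a small CNN that maps a one-channel Bayer mosaic to a three-channel RGB image, with arbitrary layer sizes.

    import torch
    import torch.nn as nn

    class TinyDemosaicNet(nn.Module):
        # Minimal convolutional demosaicer: 1-channel Bayer mosaic in, RGB out.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, mosaic):
            return self.net(mosaic)

    model = TinyDemosaicNet()
    mosaic = torch.rand(1, 1, 64, 64)      # fake Bayer mosaic, values in [0, 1]
    rgb = model(mosaic)                    # shape (1, 3, 64, 64)
    # In practice this would be trained on (mosaic, ground-truth RGB) pairs with an L1/L2 loss.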

I mean, to some degree human perception is a hallucination of reality. It is well known by magicians that if you know the small region of space that a person is focusing on, then you can totally change other areas of the scene without the person noticing.