| ▲ | dheera a day ago |
| This is also why I absolutely hate, hate, hate it when people ask me whether I "edited" a photo or whether a photo is "original", as if trying to explain away nice-looking images as fake. The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, and calibrations is not a crime. |
|
| ▲ | beezle 19 hours ago | parent | next [-] |
| As a mostly amateur photographer, I'm not bothered when people ask that question. While I understand the point that the camera itself may be making some 'editing'-type decisions on the data first, a) in theory, each camera maker has attempted to calibrate the output to some standard, and b) the public would expect two photos taken at the same time with the same model of camera to look identical. That differs greatly from what can often happen in "post production" editing - you'll never find two that are identical. |
| |
| ▲ | vladvasiliu 12 hours ago | parent | next [-] | | > the public would expect two photos taken at the same time with the same model of camera to look identical But this is wrong. My not-too-exotic 9-year-old camera has a bunch of settings which affect the resulting image quite a bit. Without going into "picture styles", or "recipes", or whatever they're called these days, I can alter saturation, contrast, and white balance (I can even tell it to apply a fixed offset on top of the auto WB, or to "keep warm colors"). All of these settings alter how the in-camera-produced JPEG looks, with no external editing required at all. So if two people are sitting in the same spot with the same camera model, who's to say they both set it up identically? And if they didn't, which one produces the "non-processed" photo? I think the point is that the public doesn't really understand how these things work. Even without going to the lengths described by another commenter (local adjustments so that there appears to be a ray of light in a particular spot, removing things, etc.), just playing with the curves will make people think "it's processed". And yet what I described above is precisely what the camera itself does. So why is there a difference if I do it manually after the fact rather than telling the camera to do it for me? | |
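| A minimal sketch of that point in Python/NumPy - illustrative only, not any camera's actual firmware, and the setting names and numbers below are invented. The per-pixel arithmetic is identical whether a camera's JPEG engine applies it at capture time or an editor applies it to the raw data afterwards; only the defaults differ. |

    import numpy as np

    def develop(rgb, wb_gains=(1.0, 1.0, 1.0), contrast=1.0, saturation=1.0):
        # White balance: per-channel gains (a "keep warm colors" bias is
        # just a different set of gains).
        img = rgb * np.asarray(wb_gains)
        # Contrast: expand or compress values around mid-gray.
        img = 0.5 + (img - 0.5) * contrast
        # Saturation: scale each pixel's distance from its own luminance.
        luma = (img @ np.array([0.2126, 0.7152, 0.0722]))[..., None]
        img = luma + (img - luma) * saturation
        return np.clip(img, 0.0, 1.0)

    scene = np.random.default_rng(0).random((4, 4, 3))  # stand-in for captured data
    warm = develop(scene, wb_gains=(1.1, 1.0, 0.9), contrast=1.2)  # "warm" in-camera style
    neutral = develop(scene)  # neutral defaults; neither output is more "original"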
| ▲ | integralid 16 hours ago | parent | prev [-] | | You and other responders to GP disagree with TFA: >There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data. |
|
|
| ▲ | gorgolo 14 hours ago | parent | prev | next [-] |
| I noticed this a lot when taking pictures in the mountains. I used to have a high-resolution camera on a cheaper phone and later switched to an iPhone. The latter produced much nicer pictures; my old phone produced very flat-looking ones. People say that the iPhone camera automatically edits the images to look better, and in a way I notice that too. But that's the wrong way of looking at it: the more-edited picture from the iPhone actually corresponds more closely to my perception when I'm actually looking at the scene. The white of the snow and glaciers and the deep blue sky really do look amazing in real life, and when my old phone, applying less postprocessing than an iPhone, captured that as a flat and disappointing-looking photo, it genuinely failed to capture what I can see with my eyes. The more vibrant, postprocessed colours of the iPhone really do look more like what I think I'm looking at. |
|
| ▲ | dsego 13 hours ago | parent | prev | next [-] |
| I don't think it's the same. For me personally, I don't like heavily processed images - not in the sense that they need processing to look decent or to convey the perception of what it was like in real life, but in the sense that the edits change the reality in a significant way, affecting the mood and the experience. For example, you take a photo on a drab cloudy day, but then edit the white balance to make it seem like golden hour, or brighten a part to make it seem like a ray of light was hitting that spot. Adjusting the exposure, touching up slightly - that's all fine, depending on what you're trying to achieve, of course. But what I see on Instagram or shorts these days is people comparing their raws and edited photos, and without the edits the composition and subject would be just mediocre and uninteresting. |
| |
| ▲ | gorgolo 11 hours ago | parent | next [-] | | The "raw" and unedited photo can be just as unrealistic as the edited one, though, or even more so. Photographs can drop a lot of the perspective, feeling, and colour you experience when you're there. When you take a picture of a slope on a mountain, for example on a ski piste, it always looks much less impressive and steep on a phone camera. Same with colours: you can be watching an amazing scene in the mountains, but when you take a photo with most cameras, the colours come out duller and everything looks flatter. If a filter enhances it and makes it feel as vibrant as the real-life view, I'd argue you are making it more realistic. The main message I get from OP's post is precisely that there is no "real unfiltered/unedited image"; you're always imperfectly capturing something your eyes see, but with a different balance of colours, a different detector sensitivity from a real eye, etc., and some degree of postprocessing is always required to make it match what you see in real life. | |
| ▲ | foldr 11 hours ago | parent | prev [-] | | This is nothing new. For example, Ansel Adams’s famous Moonrise, Hernandez photo required extensive darkroom manipulations to achieve the intended effect: https://www.winecountry.camera/blog/2021/11/1/moonrise-80-ye... Most great photos have mediocre and uninteresting subjects. It’s all in the decisions the photographer makes about how to render the final image. |
|
|
| ▲ | to11mtm a day ago | parent | prev | next [-] |
| JPEG with OOC (out-of-camera) processing is different from JPEG with OOPC (out-of-phone-camera) processing. Thank Samsung for forcing the need to differentiate. |
| |
| ▲ | seba_dos1 a day ago | parent [-] | | I wrote the raw Bayer to JPEG pipeline used by the phone I'm writing this comment on. The choices on how to interpret the data are mine. Can I tweak these afterwards? :) | | |
| ▲ | Uncorrelated 15 hours ago | parent | next [-] | | I found the article you wrote on processing Librem 5 photos: https://puri.sm/posts/librem-5-photo-processing-tutorial/ It's a pleasant read, and I like the pictures. Has the Librem 5's automatic JPEG output improved since you wrote the post about photography in Croatia (https://dosowisko.net/l5/photos/)? | | |
| ▲ | seba_dos1 11 hours ago | parent [-] | | Yes, those are quite old. I've since written a GLSL shader that acts as a simple ISP capable of real-time video processing, described in detail here: https://source.puri.sm/-/snippets/1223 It's still pretty basic compared to hardware-accelerated state-of-the-art ISPs, but I think it produces decent output in a fraction of a second on the device itself, which isn't exactly a powerhouse: https://social.librem.one/@dos/115091388610379313 Before that, I had an app for offline processing that called darktable-cli on the phone, but it took about 30 seconds to process a single photo with it :) |
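| For a sense of the interpretive choices such a pipeline makes, here is a toy raw-Bayer-to-RGB sketch in Python/NumPy - deliberately naive, not the Librem 5 code or the GLSL shader linked above, with every parameter (RGGB layout, white-balance gains, gamma) hard-coded as an assumption. |

    import numpy as np

    def demosaic_rggb(bayer):
        # Naive 2x2 binning demosaic of an RGGB mosaic: take R and B
        # directly and average the two G sites in each tile. Real ISPs
        # interpolate per pixel, but the choices involved are analogous.
        r = bayer[0::2, 0::2]
        g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2
        b = bayer[1::2, 1::2]
        return np.stack([r, g, b], axis=-1)

    def develop(bayer, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
        rgb = demosaic_rggb(bayer)
        rgb = rgb * np.asarray(wb_gains)               # white balance: one choice among many
        rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)  # tone curve: another choice
        return rgb

    mosaic = np.random.default_rng(0).random((8, 8))  # stand-in for raw sensor data
    image = develop(mosaic)                           # every step embeds a decision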
| |
| ▲ | to11mtm a day ago | parent | prev [-] | | I mean, it depends: does your Bayer-to-JPEG pipeline try to detect things like 'this is a zoomed-in picture of the moon' and then auto-fix it by pasting in a perfect moon image? That's why there's now some need to differentiate between SOOC (straight-out-of-camera) shots: Samsung did exactly that. I know my Sony gear can't call out to AI because the WiFi sucks like on every other Sony product and barely works inside my house, but I also know the first ILC manufacturer that tries to put AI right into RAW files is probably the first to lose part of the photography market. That said, I'm a purist to the point where I always offer RAWs for my work [0] and don't do any Photoshop/etc. - just D/A, horizon, brightness adjustment, and crop to taste. Where phones can possibly do better is the smaller size and true-MP structure of a cell phone camera sensor, which makes it easier to handle things like motion blur and rolling shutter. But I have yet to see anything that gets closer to an ILC in true quality than the decade-plus-old PureView cameras on Nokia phones, probably partly because they often had large enough sensors. There's only so much that computation can do to simulate true physics. [0] - I've found people -like- that. TBH, it helps that I tend to work cheap or for barter-type jobs in that scene, but it winds up being something where I've gotten repeat work because they found me and a 'photoshop person' was cheaper than getting an AIO pro. |
|
|
|
| ▲ | fc417fc802 20 hours ago | parent | prev | next [-] |
| There's a difference between an unbiased (roughly speaking) pipeline and what (for example) JBIG2 did. (JBIG2's pattern-matching compression famously led some Xerox scanners to silently substitute digits in scanned documents.) The latter counts as "editing" and "fake" as far as I'm concerned. It may not be a crime, but personally I think it's inherently dishonest to try to pass such things off as "original". And then there's all the nonsense Big Tech enables out of the box today with automated AI touch-ups. That definitely qualifies as fakery, even though the end result may be visually pleasing and some people might find it desirable. |
|
| ▲ | make3 21 hours ago | parent | prev [-] |
| It's not a crime, but applying post-processing so generously that it goes a lot further than replicating what a human sees does take away from what makes pictures interesting vs. other mediums, imho: that they're a genuine representation of something that actually happened. If you take that away, a picture is not very interesting; it's hyperrealistic, so not super creative a lot of the time (compared to e.g. paintings), and it doesn't even require the mastery of other mediums to achieve that hyperrealism. |
| |
| ▲ | Eisenstein 21 hours ago | parent [-] | | Do you also want the IR light to be in there? That would make it more of a 'genuine representation'. | | |
| ▲ | BenjiWiebe 21 hours ago | parent | next [-] | | Wouldn't be a genuine version of what my eyes would've seen, had I been the one looking instead of the camera. I can't see infrared. | | |
| ▲ | ssl-3 19 hours ago | parent | next [-] | | Perhaps interestingly, many/most digital cameras are sensitive to IR and can record, for example, the LEDs of an infrared TV remote. But they don't see it as IR. Instead, this infrared information just kind of irrevocably leaks into the RGB channels that we do perceive. With the unmodified camera on my Samsung phone, IR shows up kind of purple-ish. Which is... well... it's fake. Making invisible IR into visible purple is an artificially-produced artifact of the process that results in me being able to see things that are normally ~impossible for me to observe with my eyeballs. When you generate your own "genuine" images using your digital camera(s), do you use an external IR filter? Or are you satisfied with knowing that the results are fake? | | |
| ▲ | lefra 15 hours ago | parent [-] | | Silicon sensors (which is what you'll find in all visible-light cameras, as far as I know) are all very sensitive to near-IR; their peak sensitivity is around 900nm. The difference between cameras that can and can't see IR is the quality of their anti-IR filter. On your Samsung phone, the green filter of the Bayer matrix probably blocks IR better than the blue and red ones do. Here's a random spectral sensitivity curve for a silicon sensor: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRkffHX... |
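| A back-of-the-envelope illustration in Python of why leaked near-IR tends to read as purple - the transmission numbers below are invented for the example, and real values depend on the sensor and filter stack. |

    # Hypothetical relative IR transmissions of the R/G/B color filters at ~850nm
    # (illustrative numbers only).
    ir_leak = {"R": 0.30, "G": 0.10, "B": 0.25}

    source_intensity = 1.0  # a pure near-IR source, e.g. a TV remote LED
    pixel = {ch: source_intensity * t for ch, t in ir_leak.items()}
    print(pixel)  # {'R': 0.3, 'G': 0.1, 'B': 0.25} -> R+B heavy, i.e. purple-ish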
| |
| ▲ | Eisenstein 20 hours ago | parent | prev [-] | | But the camera is trying to emulate how the scene would look if your eyes were seeing it. For the result to be 'genuine', you would need not only the camera to be genuine, but also the OS, the video driver, the viewing app, the display, and the image format/compression. They all do things to the image that are not genuine. |
| |
| ▲ | make3 17 hours ago | parent | prev [-] | | "of what I would've seen" |
|
|