bayindirh 6 days ago

I'll kindly disagree with you. Like the other commenter, I'm on a 27" HP business monitor that comes with a color calibration certificate, and the differences are very visible. Moreover, I've been taking photos as a hobby for some time.

The angle and the different focal lengths don't matter in the rendering of the images. The issue is that cameras on phones are not for taking a photo of what you see; they're a way to share your life, and sharing your life in a more glamorous way gets you liked by the people around you. We want to be liked; it's in our nature as human beings.

So phone companies, driven both by smaller sensors (which are far noisier than a full-frame sensor) and by market pressure to reduce the processing end users have to do themselves (because it inconveniences them), started to add more and more complicated post-processing to their cameras.
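To put rough numbers on the sensor-size noise gap: a back-of-the-envelope sketch, assuming photon shot noise dominates (so SNR scales with the square root of light gathered, i.e. with sensor area at equal exposure and field of view) and using typical, approximate sensor dimensions rather than any particular camera's spec:

```python
import math

# Approximate sensor dimensions in mm (illustrative, not exact specs).
full_frame_mm2 = 36.0 * 24.0   # full-frame sensor area
phone_mm2 = 9.8 * 7.3          # roughly a 1/1.3"-class phone main sensor

area_ratio = full_frame_mm2 / phone_mm2        # how much more light is gathered
snr_advantage = math.sqrt(area_ratio)          # shot-noise SNR scales as sqrt(light)
stops = math.log2(area_ratio)                  # same gap expressed in stops

print(f"area ratio: {area_ratio:.1f}x")
print(f"shot-noise SNR advantage: {snr_advantage:.1f}x (~{stops:.1f} stops)")
```

With these numbers the full-frame sensor collects roughly an order of magnitude more light, around a 3.5x shot-noise advantage, which is the gap the phone's post-processing pipeline has to paper over.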

The result is this very article: people with their natural imperfections smoothed away, skin tones boosted in the red parts, photos sharpened but flatter, without much perspective correction, and sometimes looking very artificial.

Make no mistake, "professional" cameras also post-process, but you can both see that processing and turn it off if you want, and professional cameras correct for what the lens fails at; smartphones, including the iPhone, make "happy, social-media-ready" photos by default.

As, again, the other commenter said, it's not a limitation of the sensor (noise aside). Sony supplies most of the higher-end sensors on the market, and their cameras, and other cameras sporting sensors they produce, have won "best color" awards over and over again; Xperia smartphones pair that small sensor with a professional camera pipeline, so they can take photos that look like what you see.

I personally prefer the iPhone as my smartphone of choice, but the moment I want to take a photo I'll spend time composing, I ditch the default camera app and use Halide, because it can bypass Apple's post-processing, and can even apply none at all if you want.

lonelyasacloud 6 days ago | parent | next [-]

> The issue is, cameras on phones are not for taking a photo of what you see, but a way to share your life, and sharing your life in a more glamorous way is to get liked around people.

This is nothing new.

When film was mass market, almost no one developed their own photos (particularly colo(u)r). Instead, almost all printing went through bulk labs that optimised for what people wanted to show to their family and friends.

What is different now is that if someone cares about post-processing to present their particular version of reality, they can do it easily, without the cost and inconvenience of having to set up and run a darkroom.

bayindirh 6 days ago | parent [-]

Coming from the film era myself, I don't think it's as clear-cut as that.

Much of the post-processing an informed person does on a digital photo is an emulation of a process rooted in the darkroom, yes.

On the other hand, some of the things cameras do automatically, e.g. skin color homogenization, selective object sharpening, "aesthetic" body enhancements, hallucinating text the lens can't resolve, etc., are not darkroom-born methods, and they alter reality to the point of manipulation.

In the film days, what a run-of-the-mill photographer had was the choice of film stock, plus asking the lab, "can you increase the saturation a bit, if possible?". Even with a darkroom at home, you wouldn't have been able to selectively modify body proportions while keeping the surrounding details untouched, the way advanced image-manipulation algorithms can.

twoWhlsGud 6 days ago | parent | next [-]

Which is one reason why I often still shoot with an actual camera and sometimes even with film. I have a lifetime of experience with common film emulsions and a couple of decades of shooting with digital sensors with limited post processing.

When does that matter? It matters when I take pictures to remember what a moment was like. In particular, what the light was doing with the people or landscape at that point in time.

It's not so much that the familiar photographic workflows are more accurate; it's that they are more deterministic, and I understand what they mean and how they filter those moments.

I still use my phone (easy has a quality of its own), but I find that it gives me a choice between an opinionated workflow that overwhelms the actual moment (trying to make all moments the same moment) and a complex workflow that leaves me making the choices (and thus doing the work) I do with a traditional camera, but with much poorer starting material.

lonelyasacloud 6 days ago | parent | prev [-]

If a professional had access to darkroom facilities, pretty much everything could be done in there, right down to removing people and background objects (see for instance https://rarehistoricalphotos.com/stalin-photo-manipulation-1...).

It's just far easier for anyone to do now.

bayindirh 6 days ago | parent [-]

I know. You can remove people, use selective exposure with dodge and burn, etc.

But you can't change a person's proportions and skin tones so precisely unless you're printing small and have the time, equipment, and talent to paint the positive (the negative would be nigh impossible) in a believable manner.

On the other hand, you can enable a slimming filter on your phone or camera, click, and off you go.

Oh, also: some of the Soviet photo manipulation was done with stolen and translated French software, in 1987 [0].

[0]: https://tech.slashdot.org/story/10/11/04/1821236/soviet-imag...

tristor 6 days ago | parent | prev [-]

And this is why I have an Olympus Pen-F in one pocket and my iPhone in the other. I love my iPhone, and I use it for taking day-to-day snapshots like receipts for my expense report, but any time I care about capturing something I see, I have an actual camera in my pocket. Micro Four Thirds for size/weight, unfortunately, but while I have a full-frame camera, I'm not lugging it around with me everywhere; a Pen-F literally fits in my pocket with a lens attached.