Last week, The New Yorker published an article by Kyle Chayka complaining about how the iPhone camera’s use of machine learning is creating bad photos: “Have iPhone Cameras Become Too Smart?” (March 18, 2022). Although he makes some good points, the whole thing smacked of the nostalgia-based fear-mongering common in the photography world, which boils down to: new technologies are ruining photography, and only the old cameras/techniques/visions are the right ones.
It drives me crazy. I was going to cover another topic for my latest Smarter Image column, but I felt compelled to point out a giant hole that Chayka either missed or deliberately omitted: using computational photography features is a choice. Don’t like how the default iPhone photos look? Switch to Halide or another app that can shoot using manual controls and/or save the images in raw format for editing later.
Granted, and this is something I mention in the column, most people probably don’t know they have a choice, because it’s so easy to use the built-in Camera app. (This applies to other phones, too, not just the iPhone.) But most people also won’t notice the difference, and in many cases they’re getting much better photos than they would otherwise.
Read the column here and let me know what you think: Outsmart your iPhone camera’s overzealous AI.
For other reactions to the article, Michael Tsai has been collecting them: iPhone Cameras and Computational Photography.
🤖 Jeff