When You Stare into Darkness... and the Darkness Ignores You
The frustration of AI/ML black boxes.
Hi again, and thanks for subscribing to Photo AI! If you’re reading this on the web, click here to sign up for a free or paid subscription; either one helps me as an independent journalist and photographer. As a quick reminder, I’m on Mastodon at @jeffcarlson@twit.social, and I co-host the podcasts PhotoActive and Photocombobulate.
First Thing
This is a shorter newsletter this week due to several pressing deadlines. One of those projects once again revealed a frustration with some AI/ML features, and it led to today’s main topic. Before that, though, more about the project:
This weekend I’m teaching a class (via Zoom) to the Naples Macfriends User Group (NMUG) about using Apple Photos to enjoy your photography. As with all of their classes (which include a great roster of instructors and subjects), you need to be a member of NMUG, and the class carries an additional fee.
I’ve been lucky enough to present to NMUG several times over the years, including a class about managing files on the Mac in 2021. If you register for the class, you can attend live on Saturday, March 4, at 7:00 AM PST, or get access to the video later to watch on your own schedule. If you do sign up for the class and tune in live, say hi and bring your questions about the Photos app!
Black Box
When I was working on the organization part of my class, the topic of applying keywords inevitably came up. Nobody likes keywording, which is why I’m drawn to the promise of tools such as Excire Foto that generate keywords for you. They do this by analyzing each image and finding patterns that match scenes and objects that the machine learning models recognize. For instance, if a photo contains a sunflower, the image comes up in a search for “sunflowers” even if the term was never included as a keyword.
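Neither Excire nor Apple documents its pipeline, but the general approach is easy to sketch. macOS ships a similar on-device classifier in the Vision framework; here’s a minimal Swift sketch (the file path is a placeholder, and this is a stand-in for whatever model these apps actually use) that turns scene/object labels into candidate keywords:

```swift
import Foundation
import Vision

// A sketch of ML keyword generation using macOS's built-in Vision classifier
// (macOS 12+ for the typed results). Not Excire's or Photos' actual code.
func suggestKeywords(for imageURL: URL, minimumConfidence: Float = 0.5) throws -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []
    return observations
        .filter { $0.confidence >= minimumConfidence } // keep only confident labels
        .map { $0.identifier }                         // e.g. "sunflower", "plant", "outdoor"
}

// Hypothetical usage: print candidate keywords for one photo.
print(try suggestKeywords(for: URL(fileURLWithPath: "/path/to/photo.jpg")))
```

Store those identifiers alongside your real keywords and a search for “sunflowers” can match a photo nobody ever tagged.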
This came up in the last newsletter when I linked to the most recent episode of the PhotoActive podcast. (I know, I’ve got metadata on the brain lately.)
But what I want to draw attention to is the flip side of this cool technology. Apple Photos uses this same feature to help you navigate your image library, but it can be frustratingly inconsistent.
Case in point: I thought that searching for the Eiffel Tower would be a great example for my class. It’s a one-of-a-kind monument with clearly identifiable features that I’d expect any modern ML algorithm to pick out. So in the Photos app on my Mac, I typed “eiffel” into the Search field and got the following results:
It brought up photos captured on my iPhone at the Eiffel Tower based on the location information. One image was shot with a different camera but includes the keyword “Eiffel Tower.” And there are two screenshots that include the word “eiffel” in them. (That’s actually a super cool new feature where Photos can now read text in images.)
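That text matching is presumably the Live Text recognizer at work. Photos’ internals aren’t public, but the same Vision framework offers a text recognizer that shows how this kind of search could work; here’s a hedged Swift sketch, not Apple’s actual implementation:

```swift
import Foundation
import Vision

// A sketch of text-in-image matching: an image "matches" a query if any
// recognized string contains it. Illustrative only (macOS 12+ typed results).
func imageMatches(query: String, imageURL: URL) throws -> Bool {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate // slower, better OCR
    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []
    return observations.contains { observation in
        observation.topCandidates(1).first?.string
            .localizedCaseInsensitiveContains(query) ?? false
    }
}
```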
On the surface, it looks like a successful search. But I know I have more photos of the Eiffel Tower in my library. When I scroll back and locate the days I was in that area, sure enough, there are many more.
If the app were truly identifying objects, these would seem to be obvious and easy catches. However, none of them are tagged automatically.
That leads me to suspect that Apple’s scanning is looking for other things, such as common objects like “automobile” and “cat.” Or perhaps these particular photos just haven’t been scanned, despite being in my library for months. I attempted to snag them in other searches, such as “reflection,” “canal,” “river” (for the images in the set looking at the sunrise over the Seine), and “tower,” but was unsuccessful. I thought maybe it was because they were captured with my Fujifilm camera and not the iPhone, but no: many of those are HEIC images shot with the iPhone 13 Pro.
And so we get to the heart of the problem. Apple is using machine learning to bypass (or more accurately, supplement) keywording, but there’s no way to tell what the software recognizes. And there’s also no mechanism to ask Photos to scan particular photos in order to goose it into considering the content of the shots.
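The closest you can get to peeking inside the black box is asking Vision itself: the classifier that ships with macOS can enumerate every label it knows. Photos may well use a different model, so treat this as a proxy, but the list gives a sense of the vocabulary these searches draw from:

```swift
import Vision

// List every label Vision's bundled classifier can recognize — a proxy for
// Photos' undocumented search vocabulary, not the real thing.
let known = try VNClassifyImageRequest.knownClassifications(
    forRevision: VNClassifyImageRequestRevision1
)
let labels = known.map(\.identifier).sorted()
print("\(labels.count) labels, e.g. \(labels.prefix(10))")
print("Knows 'tower'?", labels.contains("tower"))
```

If a concept simply isn’t in the model’s vocabulary, no amount of rescanning will make a photo of it searchable.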
I’m left with a search that doesn’t give me what I want, forced to fall back on my own memory (“I thought I took more photos of that thing”) and scroll to find them.
Maybe this is just growing pains. Maybe I’m expecting too much.
Or perhaps we need to spend more time applying keywords after all.
Quick Links
John Oliver had a great segment about AI in general:
Let’s Talk
Thanks again for reading and recommending Photo AI to others who would be interested. Send any questions, tips, or suggestions for what you’d like to see covered at jeff@jeffcarlson.com. Are these emails too long? Too short? Let me know.