Hello! Apologies for the gap between the last newsletter and this one; this autumn has been a season of deadlines, a bit of chasing fall color, and more deadlines. I feel as if October blasted by too quickly, and here we are in November already. On the other hand, the trees are still mostly orange here in Seattle, and it’s soup season, so that’s all good. (Autumn is my favorite season if you couldn’t tell.)
First Things
Let’s start with some quick housekeeping. This newsletter now officially goes by The Smarter Image, both online and in the Substack app if you use it. I wasn’t crazy about sharing the name Photo AI with Topaz’s product, even though my usage came first, and honestly I prefer The Smarter Image. That also means thesmarterimage.com (which resolves to thesmarterimage.substack.com) is the best way to access previous posts.
At some point I will also migrate away from Substack to another provider, because I’m not a big fan of the Substack management. I haven’t decided which platform to move to yet (see above about being busy), but that shouldn’t matter from your perspective, since subscribers, even my paid supporters, should transition smoothly. As the saying goes, we’ll burn that bridge when we get to it.
The most important thing is that I appreciate your interest and support, whether that’s a paid subscription or simply reading along. Thank you.
Dealing with the Firehose
I am, admittedly, hiding a bit behind “being busy” regarding the time it’s taken to write a new post. The truth is, AI and ML have turned into a firehose of news and information. What I thought would be this great little niche is now the main focus of many companies’ efforts. It feels like there’s just too much to process.
I do recognize that dealing with the firehose is precisely the point of a newsletter like this. (Writer, edit thyself.) But even so, I’ve had a hard time picking and choosing between the many notes and links I add to my growing list of Smarter Image topics and research pointers.
What I don’t think is useful, though, is a constant rundown of links and pointers. Perhaps that’s the easier approach, but I’m very much a “yes, but what does it mean?” type of person. If I don’t have time to chase down every link, I’m sure you don’t either. So I’m not going to try to do everything.
I will continue to focus on practical aspects of AI/ML as it pertains to photography. And for that, this week, I want to point to Adobe. At Adobe MAX, the company’s yearly conference, “AI” was everywhere. The star of the show was Firefly Image 2, the next generation of its generative technology, which has improved noticeably over the original Firefly in less than a year. The results are better, though I think Adobe is still catching up to Midjourney and DALL-E in terms of fidelity. Firefly Image 2 is currently in beta.
The original Firefly, though, is now out of beta, and its creations can be used commercially. That’s important because the imagery created in Photoshop, Illustrator, and Adobe Express is generated by that first image model.
Perhaps the focus on AI/ML explains why updates to the Lightroom ecosystem of apps seemed modest this time around. To be fair, though, they may only look modest compared to the seismic editing shift that ML-powered masking brought last year. The Lightroom apps gained a new Lens Blur feature that simulates background blur using either depth map information captured by the camera (like the portrait mode in smartphones) or depth data the feature generates itself based on what it understands of the image; there’s a conceptual sketch of the idea below, and I’ll look at the feature in more depth (ha) later. Also new is Point Color, which lets you select and manipulate individual colors, not simply general hues. Last of the big new features is support for editing in HDR mode, which expands the brightness and dynamic range of photos containing that data, on screens capable of displaying it.
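If you’re curious how depth-guided blur works under the hood, here’s a minimal conceptual sketch in Python. To be clear, this is not Adobe’s implementation, and the file names and depth map source are placeholders; the point is only the core idea of varying blur strength by per-pixel depth.

```python
# Conceptual sketch of depth-guided background blur (not Adobe's code).
# Assumes "photo.jpg" plus a matching grayscale depth map "depth.png";
# both file names are hypothetical.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

image = np.asarray(Image.open("photo.jpg"), dtype=np.float32) / 255.0
depth = np.asarray(Image.open("depth.png").convert("F"))
depth = (depth - depth.min()) / (depth.max() - depth.min())  # 0 = near, 1 = far

# Precompute progressively blurrier copies of the photo, then pick a
# copy per pixel according to its quantized depth: near pixels stay
# sharp, far pixels get the strongest blur.
levels = 5
sigmas = np.linspace(0, 8, levels)  # blur strengths from none to strong
stack = [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]
index = np.clip((depth * (levels - 1)).round().astype(int), 0, levels - 1)

result = np.zeros_like(image)
for i, blurred in enumerate(stack):
    mask = (index == i)[..., None]  # broadcast the 2D mask over RGB
    result = np.where(mask, blurred, result)

Image.fromarray((result * 255).astype(np.uint8)).save("lens-blur.jpg")
```

Lightroom’s actual Lens Blur is far more sophisticated (simulating realistic bokeh shapes, for one thing), but the depth map as a per-pixel recipe for blur strength is the common thread.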
(Speaking of Lightroom, I’m working on a free addendum covering the features for my new book Adobe Lightroom: A Complete Course and Compendium of Features. I’ll let you know when the PDF is ready to download.)
Part of the joy of Adobe MAX, though, is the Sneaks portion that demonstrates some of the features Adobe is working on for future updates. For example, Project Stardust is an object-aware editing engine that allows you to move items around in the frame as if they were originally on their own layers. I was also impressed with Project SeeThrough, which uses machine learning to detect and remove glass glare. It’s fun to browse the whole set of #adobemax videos.
As much as I’m impressed with the ability to generate images from text prompts—and I appreciate that it’s a new art style of its own—incorporating these technologies into everyday workflows will have a greater impact on most photographers.
Further
Are you in the Seattle area? Come say hello in person! I’m doing a free presentation about the ML-based features in Lightroom on Saturday, November 18 at 10:00 AM at Kenmore Camera in Kenmore (a few minutes’ drive northeast of Seattle). For details and to register, go here: How Smart Is Lightroom? AI-Powered Photo Editing – LIVE w/ Jeff Carlson. I’m also doing a larger three-hour live class on Lightroom in January.
I’m no longer active on Twitter/X, and am finding Threads to be a good alternative, especially for photographers. I know, it’s a Meta/Zuckerberg production, and it’s inevitable that it will sink into the bilge pool of ads and crap that has claimed Instagram, but for now I’m enjoying it. Follow me at @jeffcarlson there. (I’m also active on Mastodon as @jeffcarlson@twit.social.)
Glass, one of my favorite photo-sharing sites, added AI-powered search, and it works really well.
I wrote a lengthy review of Pixelmator Pro at DPReview: Pixelmator Pro 3.4 Camelot review: An all-purpose image editor for the Mac
Let’s Connect
Thanks again for reading and recommending The Smarter Image to others who would be interested. Send any questions, tips, or suggestions for what you’d like to see covered at jeff@jeffcarlson.com. Are these emails too long? Too short? Let me know.