A funny thing happens when you’re busy: time passes quickly. Apologies for the delay since the last newsletter. The Big Project I finished needed some follow-up work, part of the regular process (I’ll be able to share more about it soon!), and there have been other projects and deadlines to meet. Not to be too frank (but of course I will): the more paid subscribers this newsletter gets, the less I’ll need to take on a wide variety of other projects. Consider subscribing for as little as $4.17 per month. Thanks!
First Thing
Another funny thing happened over the last month: nothing. More specifically, DPReview has not shut down and is still publishing past the date when Amazon was going to shutter it. I don’t know why, and the publication’s update doesn’t really say much, but it’s been great to see it continue, from both a reader’s and a writer’s standpoint. While the site has remained online, the editors contacted me about writing one of the pitches I had in the pipeline, which was published last week: Remove All the Things: Using modern software to erase pesky objects.
In it, I look at the various methods of removing unwanted items from photos in editing software, from dabbing out sensor smudges and blurry birds with healing tools to AI-based detection and removal features such as the Remove tool in the public Photoshop beta and the one-touch Magic Eraser in Google Photos on the Pixel 6 and 7 phones. The article culminates in what we’re on the cusp of right now: using generative AI to rebuild the areas where objects are removed.
Which leads me to…
Generative AI Officially Comes to Photoshop
It’s not often you hear me exclaim “holy f$%k” out loud in my office over a new software feature, but that’s what I just did. (Fortunately my wife wasn’t on a work call at the time.) Adobe today officially integrated generative AI technology into Photoshop, and I’m a little stunned.
I shouldn’t be: this is something we’ve known was coming to Photoshop and other apps, a point I made in the DPReview article above. But seeing it in action is something else.
Here’s the gist: In the latest public beta of Photoshop, you make a selection in any image and, in the Contextual Task Bar (a new floating UI element that appears), click the Generative Fill button. That reveals a text field where you can describe anything you’d like to see in the space you selected; or, leave the field blank and click Generate to simply remove the object and fill in the background.
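If you’re curious what’s happening behind that button, the operation is what researchers call inpainting: the app sends the image, a mask marking the selected pixels, and your prompt to a model that synthesizes replacement pixels. Adobe hasn’t published an API for Generative Fill, so purely as an illustration, here’s a minimal sketch of the same mask-plus-prompt pattern using OpenAI’s DALL-E image-edit endpoint (the file names and prompt are placeholders):

```python
import openai  # the OpenAI Python library (v0.27-era API)

openai.api_key = "YOUR_API_KEY"

# Inpainting: transparent areas of the mask mark the pixels to regenerate.
# Both files must be square PNGs under 4MB for this endpoint.
response = openai.Image.create_edit(
    image=open("photo.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="a dragon perched on the rocks",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the generated result
```

A vague prompt (or, in Photoshop’s case, an empty one) nudges the model toward filling in plausible background rather than inventing a new object.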
I made a quick video of my first attempt at removing an area of a photo, which you can watch here:
Colin Smith posted a great overview video that shows several features in action: Biggest thing to EVER happen in Photoshop!
Now that I’ve picked my jaw up off the floor, let’s talk about some details, because there’s a lot going on.
First, it’s noteworthy that Adobe incorporated the feature directly into Photoshop. It’s a core feature, and it’s simple to use: no Discord servers or arcane commands to memorize. You can select an area of your image and either ask Photoshop to remove what’s there, filling in the missing pixels with AI-generated content, or specify something else to put in its place. (A friend suggested a dragon, so here you go:)
Second, and something I’m eager to learn more about: this type of image generation is expensive, not in the financial sense for you or me, but in the amount of computing resources required to make it happen (which, on the back end, is also financially expensive). And Adobe has just opened this capability up to any Creative Cloud subscriber willing to install the beta. (To do that, open the Creative Cloud app, select Beta apps in the sidebar, and install Photoshop (Beta) v24.6.)
But that’s not all, because as of today, Adobe Firefly is also open to everyone. So the demand on the server architecture behind it all must be ramping up dramatically. (See my previous writeup about Firefly.)
And there’s another consideration: from what I can tell, Photoshop is working on my full-resolution image and replacing areas at the same resolution. Other generative AI platforms, such as DALL-E, create low-resolution images, typically 1024x1024 or 2048x2048 pixels. The larger the generated area, the more resources are required to fill the space, which further increases the computational load.
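To put rough numbers on that (my back-of-the-envelope math, not Adobe’s): a 24-megapixel camera image holds roughly 23 times the pixels of a 1024x1024 generation, so working at native resolution multiplies the work accordingly.

```python
# Back-of-the-envelope pixel comparison (illustrative numbers only)
dalle_output = 1024 * 1024        # ~1.05 million pixels, a typical DALL-E size
full_frame = 6000 * 4000          # ~24 million pixels, a common camera resolution

print(full_frame / dalle_output)  # ~22.9x as many pixels at full resolution
```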
I highly encourage you to watch the episode of my Photocombobulate podcast where we talked to Mark Heaps about all of this, including the issue of compute capacity.
I’m looking forward to seeing what photographers and artists do with this technology, and it also opens the door wider to questions of how much of this pushes “photography” into the realm of “art.” Is Content-Aware Fill an acceptable way to remove items, but Gen AI isn’t? Let me know your thoughts: jeff@jeffcarlson.com.
Further
Capture One has finally added AI/ML tools to its popular editor, and they look pretty good: Capture One Update Adds New AI Tools and Wireless Tethering for Fujifilm.
Looking ever forward, the technology behind DragGAN is impressive. It lets you adjust images by placing control points and dragging them; generative AI then synthesizes the intended result. It’s a research project for now (here’s the paper with lots of examples and videos), but I’m sure we’ll see the technology in other areas soon.
Let’s Connect
Thanks again for reading and recommending Photo AI to others who would be interested. Send any questions, tips, or suggestions for what you’d like to see covered at jeff@jeffcarlson.com. Are these emails too long? Too short? Let me know.