I’d like to say that the delay between this post and the last was because I was still on vacation, but actually it’s due to catching up after vacation. And boy is there a lot to catch up on. I finished a giant project this week that you’ll hear more about soon, and treated myself to a boozy shake and sweet-potato tots. And then slept a lot.
First Thing(s)
In the last edition, I pointed to a video of Aaron Hockley and me talking about the state of Photography AI. As promised then, we did a followup live Q&A to extend that discussion and answer your queries. Thanks to the folks who sent in questions! You can catch that video here. And feel free to send me any questions you still have; I’ll do my best to answer them.
The latest episode of the PhotoActive podcast is out, where Kirk McElhearn and I talk about the new AI-assisted Denoise and Healing features in Lightroom and other apps, including the new Remove tool in the public beta of Photoshop. I’m finding that Denoise in Lightroom works particularly well, but our discussion wasn’t only about what it can do. Kirk was just as curious to know how often photographers find themselves needing these capabilities. Listen and subscribe now at PhotoActive #140 Denoise and Healing Tools.
Speaking of denoising, definitely go read Adobe’s blog post that breaks down how the Denoise tool works, Denoise Demystified. It also answers some of the questions that Kirk and I had about what the feature is looking for and how it’s trained using machine learning (ML) models.
“As with our previous Enhance features, the idea is to train a computer using a large set of example photos. Specifically, we used millions of pairs of high-noise and low-noise image patches so that the computer can figure out how to get from one to the other.”
Here Comes AI in Political Ads
With the improving quality of AI-generated imagery, audio, and video, we knew this time would come. We got the first widespread example when Donald Trump claimed that he would be criminally indicted on a specific day. That day arrived without any actual indictment, but AI-generated photos spread online depicting him running from police, taking up arms, and ultimately getting forcibly arrested—none of which happened.
Now, the official political machines are getting into the act. On the day that President Biden announced he would be running again in 2024, the RNC (Republican National Committee) released a video containing entirely AI-generated images depicting a made-up future if Biden is re-elected: Republicans counter Biden announcement with dystopian, AI-aided video (Washington Post gifted link, no subscription required). WaPo didn’t actually link to the ad itself, which you can find here at The Daily Beast.
To be honest, I’m surprised that the video, composed of AI-generated still images, includes a (very) small disclaimer in the top-left corner that reads, “Built entirely with AI Imagery.” And I’ll give them props for doing that; I think any political ad should by law include such a disclaimer. But I’m not optimistic that such a thing will happen.
So that means we all have to be even more critical of the ads we see and hear as this election cycle moves forward (and in advertising, news, and elsewhere). My critical-analysis skills already get a workout every day from junk email and unsolicited phone calls and texts, so I’m not looking forward to this.
It’s not like this is the first time political ads have been disingenuous or stretched an idea to make a point. The famous 1964 “Daisy” ad for Lyndon Johnson’s campaign shows a young girl counting petals as she pulls them off a daisy, then cuts to a military countdown and images of nuclear explosions.
There isn’t anything inherently “AI” about the RNC ad. The point is that GenAI can make nearly any unreal image, like a former president being arrested or a candidate rendered into a compromising situation. Perhaps more important, those ads can be generated, edited together, and released quickly and inexpensively, without a campaign needing to record footage, source stock imagery, deal with licensing, or handle the other traditional parts of production.
Keep a sharp eye, friends. We’re going to need to look at everything more critically going forward.
Further
All that said, it’s clear that, at least in their current incarnations, these tools still make convincing artificial video difficult and expensive to produce. For a lighter-hearted look, be sure to watch this KitKat ad that playfully leans into the state of GenAI.
And this might be my favorite thing of last week, an entirely AI-generated (script, images, video, and voiceover) commercial for a made-up restaurant called Pepperoni Hug Spot. The people are the stuff of nightmares, but the tagline is brilliant: “Like family, but with more cheese!” I’ll take two please.
So how well can these technologies imitate a real person? Joanna Stern at the Wall Street Journal decided to find out in “I Challenged My AI Clone to Replace Me for 24 Hours.”
Two Adobe Firefly things: Howard Pinsky showed off an example of the upcoming “inpainting” feature, where you will be able to upload your own image, paint out an area, and then describe what you want to appear in that space.
Also, Adobe will be adding Firefly technology to its video and audio tools, including automatic color tone mapping: type that you want a scene to look warmer, and it will make the change. You’ll also be able to insert B-roll footage, music, or sound effects by typing what should appear. Be sure to watch the video on Adobe’s blog page.
Let’s Connect
Thanks again for reading and recommending Photo AI to others who would be interested. Send any questions, tips, or suggestions for what you’d like to see covered at jeff@jeffcarlson.com. Are these emails too long? Too short? Let me know.