Starting next week, the Google Photos app will add new disclosures when a photo is edited with one of its AI features, including Magic Editor, Magic Eraser, and Zoom Enhance. When you tap a photo in Google Photos and scroll to the bottom of the “Details” section, you’ll see a disclosure noting that the photo was “edited with Google AI.”
Google says it’s introducing this disclosure to “further improve transparency,” but the result isn’t especially obvious. There is still no visual watermark within the photo frame to indicate at a glance that a photo was AI-generated. When someone sees a photo edited by Google’s AI on social media, in a text message, or even while scrolling through the Photos app, they won’t immediately see that the photo has been composited.
Google announced the new disclosures for AI photo edits in a blog post on Thursday, after launching its new Pixel 9 phones with these AI photo editing features. The disclosure appears to be a response to the backlash Google has received for widely distributing these AI tools without visual watermarks that people can easily spot.
As for Google’s other new photo editing features that don’t use generative AI, Google Photos now notes in the metadata that those photos have been edited, but not in the Details tab. These features blend multiple photos together and display them as one clean image.
These new tags don’t exactly solve the main problem people have with Google’s AI editing features. A visual watermark in the photo frame, at least one that can be seen at a glance, might help people feel they aren’t being fooled, but Google doesn’t offer one.
All photos edited with Google AI already disclose as much in their metadata. Now there’s also an easier-to-find disclosure under the Details tab in Google Photos. The problem is that most people won’t look at the metadata or Details tab of a photo they come across on the internet; they’ll just look and keep scrolling without investigating further.
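For the curious, checking that metadata yourself is straightforward. Below is a minimal sketch using the widely available exiftool utility; Google hasn’t publicly documented exactly which fields it writes, so the specific tags queried here (DigitalSourceType, Software, Credit) and the filename are assumptions for illustration, not a confirmed description of Google’s labeling.

```python
# Minimal sketch: inspect a photo's metadata for an AI-editing disclosure.
# Assumes the exiftool command-line tool is installed. The tag names queried
# below are assumptions about where such a disclosure might live, not
# confirmed details of how Google Photos labels its AI edits.
import json
import subprocess

def ai_edit_tags(path: str) -> dict:
    """Return metadata fields that commonly carry editing/provenance info."""
    result = subprocess.run(
        ["exiftool", "-json", "-DigitalSourceType", "-Software", "-Credit", path],
        capture_output=True, text=True, check=True,
    )
    record = json.loads(result.stdout)[0]
    # Drop the SourceFile entry and any tags the file doesn't actually contain.
    return {k: v for k, v in record.items() if k != "SourceFile" and v}

if __name__ == "__main__":
    # "edited_photo.jpg" is a hypothetical filename for illustration.
    print(ai_edit_tags("edited_photo.jpg"))
```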
To be fair, visual watermarks in the frame of an AI photo aren’t a perfect solution either. They can easily be cropped or edited out, and then you’re back to square one. We contacted Google to ask if it is working on anything to help people quickly identify whether a photo has been edited by its AI, but we didn’t receive an immediate response.
The proliferation of Google’s AI imaging tools could increase the amount of synthetic content people see on the internet, making it harder to tell what’s real and what’s fake. The approach Google has taken, relying on metadata watermarks, depends on platforms to tell users they’re viewing AI-generated content. Meta already does this on Facebook and Instagram, and Google says it plans to flag AI images in Search later this year. Other platforms, however, have been slower to catch up.