
Why Apple Should Adopt Google's AI Image Tagging

2025-09-07 · Jeff Carlson · 5-minute read
AI Ethics
Content Authenticity
Mobile Photography

The Rise of AI Cameras and the Need for Trust

Artificial intelligence is increasingly central to modern smartphone cameras. In Google's Pixel 10 Pro, nearly every major new camera feature relies on AI. Whether it's the Pro Res Zoom using generative AI to sharpen a 100x zoomed image or the Auto Best Take feature blending multiple shots to ensure everyone looks their best, AI is working behind the scenes.

However, alongside these advancements, Google has quietly introduced a critical feature that deserves more attention: C2PA content credentials. As AI's ability to create and manipulate images grows, the problem of AI-driven misinformation becomes more severe. C2PA, or the Coalition for Content Provenance and Authenticity, is an industry-wide initiative to create a standard for identifying whether an image has been created or edited with AI, helping to separate genuine content from sophisticated fakes.

While Google has joined this effort, Apple, the creator of the world's most popular cameras, has not. With millions of iPhones in the hands of users, it's time for Apple to implement this technology, potentially in its upcoming iPhone 17 cameras, to help build a more transparent digital ecosystem.

Understanding C2PA: How Google Is Tagging Images

The C2PA initiative, originally founded by Adobe, works by embedding metadata, or content credentials, into media files to identify their origin and any subsequent edits. Starting with the Pixel 10 line, every single photo taken is embedded with this C2PA information. If you then use an AI tool in the Google Photos app to edit that photo, it will be flagged accordingly.
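As background on what "embedding metadata" means in practice: in JPEG files, C2PA content credentials are carried inside APP11 marker segments as JUMBF boxes. The sketch below is a simplified, hypothetical detector that merely looks for an APP11 segment containing the `c2pa` label; a real verifier (such as Adobe's open-source `c2patool`) would parse the full JUMBF box structure and cryptographically validate the manifest's signatures.

```python
def has_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Rough check: does this JPEG contain an APP11 (0xFFEB) marker
    segment that mentions the C2PA JUMBF label? Not a real verifier."""
    i = 2  # skip the SOI marker (0xFF 0xD8) at the start of the file
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync; we are no longer at a marker
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start of entropy-coded data
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 + C2PA label
            return True
        i += 2 + length
    return False
```

Tools like `c2patool` or the Content Credentials "Verify" site do the full job; this only illustrates where in the file the credentials live.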

When you view an image's details in Google Photos on a supported device, you'll find a new section titled "How this was made." An unedited shot will simply state, "Media captured with a camera." But if a feature like Pro Res Zoom was used, the information will change to "Edited with AI tools."

Image: A photo captured by the Pixel 10 Pro XL, shown alongside its C2PA information, which indicates that AI tools were used. In this case it was Pro Res Zoom, which uses generative AI to rebuild an image zoomed at 100x.

Similarly, if you use a feature like Help me edit to completely replace the background of an image, the resulting photo will also be tagged as AI-edited.

Image: Using Google's descriptive editing tool in Google Photos adds the "Edited with AI tools" indicator because the background has been replaced with an AI-generated one.

Google differentiates between generative AI and the standard computational photography that all modern smartphones use. The machine learning that merges exposures and identifies scenes is labeled "Edited with non-AI tools." The system isn't perfect yet; for example, an AI-generated video clip created in Google Photos didn't display a C2PA tag, though it did have a "Veo" watermark.

Image: Frames from an AI-generated video of a man throwing confetti. The frames were generated from a still photo, but because the result is a video, Google Photos isn't showing a C2PA tag.

A New Standard: Why Every Photo Needs a Birth Certificate

The most critical aspect of Google's strategy isn't just about flagging AI edits; it's about adding C2PA data to every single photo the camera captures, edited or not. The ultimate goal isn't to single out AI content but to establish a new baseline for trust. By creating a world where most legitimate photos have verifiable origin information, those that lack it will naturally become more suspect.

Isaac Reynolds, a group product manager for Pixel cameras, explained the strategy: "The reason we are so committed to saving this metadata in every Pixel camera picture is so people can start to be suspicious of pictures without any information. We're just trying to flood the market with this label so people start to expect the data to be there."

This is the core of the argument. It's about creating a system where you can check the provenance of any image to help you make a more informed judgment, especially when it concerns news events or potential scams.

The Missing Piece: Why Apple's iPhone Is Crucial to This Effort

For this 'flood the market' strategy to succeed, it needs buy-in from the biggest players. Google is not alone; Samsung also adds AI watermarks and content credentials to images using its AI tools. However, Apple's absence is significant. The company is not currently listed as a member of the C2PA, and its iPhones are arguably the most ubiquitous image-capture devices on the planet.

By adopting C2PA and tagging every photo taken with an iPhone, Apple could instantly and massively increase the volume of authenticated images in the world. This would lend enormous weight to the standard, encouraging more companies to join and accelerating the shift toward a new norm where untagged images are rightly viewed with skepticism.

Given Apple's market influence and the sheer scale of its user base, its participation wouldn't just be helpful; it would be transformative. Adding C2PA credentials to the iPhone would be a powerful step toward combating misinformation and fostering a more trustworthy digital landscape.
