Google's New Pixel Camera AI Is Deceptively Powerful
Google's new Pixel 10 phones have arrived, and they've brought a host of generative AI features directly into the camera application. While modern phones commonly use "computational photography" to enhance images with lighting and post-processing effects, the introduction of generative AI elevates this to a completely new level—one that raises significant questions.
For years, tech enthusiasts have debated the question "What is a photo?" as post-processing techniques create images that diverge from reality. We've seen night skies appear brighter and faces look smoother than they do in a mirror. Generative AI integrated into the camera is the ultimate expression of this dilemma. While these features can be useful, they force a philosophical question: should a photo reflect what the photographer saw, or should it be as visually appealing as possible, even at the cost of realism?
This conversation has mostly been confined to niche circles, but as AI begins to add new objects or backgrounds to photos before you even open an editing app, it's a question for everyone. The way Google is implementing AI in its latest phones means you could end up with an AI-generated photo without even realizing it.
Pro Res Zoom: Enhancement or Fabrication?
Perhaps the most notable new AI camera feature is Pro Res Zoom. Google markets it as a "100x zoom" that works much like the fictional "zoom and enhance" technology from crime shows.
When using a Pixel 10 Pro or Pro XL, you can push the zoom up to 100 times. On the surface, it feels like a standard digital zoom that relies on cropping. However, it runs into a fundamental problem: you cannot create resolution the camera never captured. If you zoom in to the point where the sensor only recorded a few blurry pixels, it's impossible to know what was truly there.
This is why Pro Res Zoom is more of an AI edit than a true 100x zoom. The phone zooms in as far as it can, then uses the resulting blurry pixels as a prompt for an on-device AI model. The AI guesses what the object should look like and renders that guess into your photo. The result isn't a capture of reality, but a plausible reconstruction.
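To make the information limit concrete, here is a minimal Pillow sketch (the file name, photo size, and crop coordinates are hypothetical) that mimics a conventional digital zoom by cropping a tiny region and upscaling it. No resampling filter can recover detail the sensor never recorded, and that gap is what Pro Res Zoom fills with generated pixels instead.

```python
from PIL import Image

# Hypothetical input: a full-resolution photo from the phone's main sensor.
photo = Image.open("concert_shot.jpg")

# At extreme zoom, only a sliver of real data covers the subject.
# Crop a 40x30 patch -- roughly what a "100x zoom" actually captured.
left, top = 2000, 1500
patch = photo.crop((left, top, left + 40, top + 30))

# Conventional digital zoom: upscale the patch back to viewing size.
# Bicubic interpolation only smooths between the 1,200 real pixels;
# it cannot invent detail that was never recorded.
digital_zoom = patch.resize((800, 600), Image.Resampling.BICUBIC)
digital_zoom.save("digital_zoom.jpg")

# Pro Res Zoom's approach, conceptually: hand the blurry patch to a
# generative model as a prompt and let it fill in plausible detail.
# The output is a reconstruction, not a capture.
```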
While this might be acceptable for inanimate objects like rock formations, it becomes problematic for faces or landmarks. You might believe you captured a great close-up of a singer at a concert, unaware that your "zoom" was essentially AI image generation. Although Pro Res Zoom lets you choose a non-AI version, the AI-enhanced option isn't clearly labeled during selection, a more casual approach to AI integration than Google has taken before.
Ask to Edit: Simplifying Changes with Hidden AI
The casual integration of AI continues after the photo is taken. With the Pixel 10, you can now use natural language to ask AI to edit your photos directly within the Google Photos app. By opening a photo and tapping the edit icon, a chat box appears, allowing you to type or speak your desired changes.
On one hand, this is a great accessibility feature. It helps users apply simple crops or filters without navigating a potentially confusing interface of icons and sliders.
However, Ask to Edit also accepts more complex requests, and it doesn't disclose when it's using generative AI to fulfill them. You could ask it to replace a background or remove reflections from a window. Many of these edits, even seemingly simple ones like glare removal, require generative AI. A user might ask the AI to "zoom out" on a photo, not realizing that this requires the AI to imagine and generate the surrounding environment, introducing a high risk of creating a fictional scene.
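The "zoom out" case is easy to see in code. In this minimal Pillow sketch (file names and the 2x factor are hypothetical), placing the original photo on a larger canvas shows that every border pixel is simply unknown, so a generative model has to invent the surrounding scene to fill it.

```python
from PIL import Image

photo = Image.open("portrait.jpg")
w, h = photo.size

# "Zoom out" by 2x: the new canvas has four times the photo's area.
canvas = Image.new("RGB", (w * 2, h * 2), "gray")
canvas.paste(photo, (w // 2, h // 2))
canvas.save("zoomed_out.jpg")

# Everything gray here is information the camera never had. An
# Ask to Edit zoom-out replaces it with AI-generated scenery --
# a fictional border wrapped around a real photo.
```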
Camera Coach: A Transparent Use of AI
Then there's Camera Coach, a feature that uses AI in a more transparent way. Instead of putting AI into your photos, it analyzes what your camera sees and suggests better framing and angles, coaching you on how to achieve a better shot.
This approach is straightforward. The suggestions are just ideas, and the final photo you take is exactly what you see in the viewfinder, with no hidden AI modifications. This eliminates the concern of passing off an altered reality as truth. The worst-case scenario is frustration if the AI suggests an impossible shot, not an unknowingly fabricated image.
The Critical Need for AI Transparency
The debate over what constitutes a photograph isn't new. Some photos are meant to document reality, while others aim for aesthetic appeal. The issue with Google's new features isn't the use of AI itself, but how casually it's being implemented, blurring the line between traditional enhancement and outright generation.
Presenting AI image generation as "100x zoom" is alarming because it's not what users would reasonably expect; the label itself is misleading. People should know when AI is being used on their photos so they can be confident about when a shot is realistic and when it's not.
Google is aware of this issue and has built C2PA content credentials into all photos taken on the Pixel 10, which record in the metadata whether AI was used. However, this is not a practical solution for the average user, who is unlikely to check a photo's metadata. Features like Ask to Edit are designed for simplicity, and requiring users to manually check metadata contradicts that goal.
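For readers who do want to check, proper inspection tools exist: the Content Authenticity Initiative's open-source c2patool CLI can print a file's C2PA manifest. As a crude illustration only (not a real C2PA parser, and the file name is hypothetical), a byte scan can at least hint whether a manifest is embedded, since C2PA labels typically appear as literal strings in the file:

```python
from pathlib import Path

def has_c2pa_manifest(path: str) -> bool:
    """Crude heuristic: look for the literal 'c2pa' label that C2PA
    manifests typically embed (e.g. in JUMBF boxes and assertion names
    like 'c2pa.actions'). A real check should parse the manifest with
    a proper library or the c2patool CLI."""
    return b"c2pa" in Path(path).read_bytes()

if has_c2pa_manifest("pixel10_shot.jpg"):  # hypothetical file name
    print("Possible C2PA content credentials; inspect with c2patool.")
else:
    print("No C2PA marker found (or it is stored elsewhere).")
```

Even this minimal check is more effort than most people will ever spend on a photo, which underscores why metadata alone isn't adequate disclosure.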
Users should be notified before they use an AI feature, not after the fact via hidden metadata. Other companies like Adobe already do this with a simple watermark on AI-generated projects. While opinions on AI imagery vary, users should never be in a position where they are creating it by accident. Of Google's three new AI camera features, only Camera Coach fully embraces this transparency.