AI Is Secretly Editing Your Content Without Consent
The line between reality and digital alteration is blurring faster than ever, with AI at the helm. The debate has been reignited by YouTube's recent use of AI tools to "enhance" videos on its platform, a move made without the knowledge or consent of the creators involved. Viewers were also left in the dark.
Without transparency, we have little power to identify, let alone challenge, AI-edited content. Yet, this kind of distortion has a long history that predates today's sophisticated AI.
A History of Invisible Edits
Platforms like YouTube are not the first to engage in subtle image manipulation. For decades, lifestyle magazines have "airbrushed" photos to alter celebrity features, often without their knowledge. In 2003, actor Kate Winslet famously criticized British GQ for digitally slimming her body on its cover without her consent.
The public has also embraced image editing. A 2021 study of 7.6 million Flickr photos found that filtered images were more likely to be viewed and engaged with. YouTube's recent actions, however, mark a critical shift: the editing is no longer in users' hands.
Modern Platforms and Algorithmic Alterations
This isn't an isolated incident in the digital age. TikTok faced a similar controversy in 2021, when a "beauty filter" was automatically applied to some users' videos without their consent. This is particularly concerning, as research has linked the use of such filters to negative self-image.
Undisclosed alterations have occurred off social platforms too. In 2018, new iPhones were found to be automatically "smoothing" users' skin, a feature Apple later attributed to a bug and reversed. The issue also reached the political sphere when Nine News published an AI-modified photo of an Australian MP that altered her clothing.
The problem extends beyond visuals. In 2023, author Jane Friedman discovered Amazon selling AI-generated books falsely attributed to her, threatening her reputation. In every case, the algorithmic changes were presented to the public without disclosure.
The Disclosure Dilemma: Why Transparency Is Tricky
Disclosure is one of our simplest tools for adapting to an AI-mediated reality. Studies suggest that companies that are transparent about their AI use may be seen as more trustworthy. And while global trust in AI systems is low, people tend to trust AI they have used themselves, believing it will improve over time.
So why do companies avoid disclosure? Because it can be a double-edged sword: research shows that revealing AI use consistently reduces trust, though not as much as being caught hiding it. The effects are also mixed. Disclosures on AI-generated misinformation may not make it less persuasive, but they can make people hesitate to share it.
Navigating an AI-Generated World
It will only become harder to identify AI-manipulated content, as even sophisticated AI detectors remain a step behind. Another major challenge is confirmation bias: our tendency to be less critical of media that confirms our existing beliefs.
Fortunately, there are strategies we can use. Younger media consumers have developed methods like triangulation: checking multiple reliable sources to verify a piece of information. Users can also curate their social media feeds to prioritize trusted sources. This is an uphill battle, however, as platforms like YouTube and TikTok favor an infinite-scroll model that encourages passive consumption.
While YouTube’s decision is likely within its legal rights, it puts users and creators in a difficult position. Given the immense power of digital platforms, this probably won’t be the last time we see reality manipulated without our consent.