Photojournalist Exposes AI Image Deception And Copyright Risks
A Photojournalist's Journey into AI's Impact
After a distinguished quarter-century career with the St. Louis Post-Dispatch, Pulitzer Prize-winning photojournalist David Carson took a hiatus last year. Rather than a traditional break, Carson dedicated his time to exploring the burgeoning intersection of journalism and artificial intelligence.
Now a 2025 John S. Knight Journalism Fellow at Stanford University, Carson became interested in AI after a series of concerning incidents. He observed AI-generated images, such as those falsely depicting Donald Trump's arrest or smoke billowing from the Pentagon, rapidly spreading misinformation, highlighting the technology's potential for deception.
St. Louis Post-Dispatch photojournalist David Carson. Courtesy of David Carson.
The Rise of Believable Fakes and AI's Role
The creation of manipulated images depicting fabricated events or individuals is not a new phenomenon. Carson acknowledged:
People could do this for years with Photoshop, but creating high-quality fakes really took a lot of technical skill. What DALL-E and these AI image generators did was lower the bar to creating images that were believable.
This newfound accessibility, he argues, amplifies the risk of misuse.
Testing AI: An Experiment Reveals Copyright Concerns
Carson delved deeper, investigating how AI creates these images and questioning whether the AI "learning" process, which often involves scraping vast amounts of online data, amounts to copyright infringement. To test this, he conducted an experiment: he prompted an AI image generator to create a specific scene, a protester in an American flag T-shirt throwing a tear gas canister during the Ferguson Uprising.
While such a prompt could theoretically yield countless visual interpretations, Carson was shocked by the outcome. The AI quickly generated an image remarkably similar to a photograph captured by Post-Dispatch photojournalist Robert Cohen on August 13, 2014. This original photograph was a key part of the 2015 Pulitzer Prize-winning submission by Carson and Cohen for their coverage of the 2014 Ferguson protests.
Is AI Learning or Stealing? A Troubling Discovery
For Carson, the AI's production of an image so closely resembling Cohen's work was a clear indication that the technology was not "learning" in an abstract sense. Instead, he concluded, it was effectively "stealing" by replicating existing copyrighted material it had processed from training data or scraped from public websites.
Carson explained his choice for the experiment:
I thought that Robert’s photo would be a good example, because it is an iconic photograph, probably the best-known photograph from the protest. Within a few pretty simple prompts, we ended up at something that I think was pretty clearly a copyright violation. That was really troubling to me.
Eroding Trust: The Impact of AI on Visual Truth
Carson elaborated on the broader implications:
I think it confuses the public as to what's real and what's not. We're used to trusting our eyes. And I'm sort of fascinated with us being in this world, in this time now, where it becomes more difficult to trust what we see.
This erosion of trust in visual media is a significant concern for the veteran photojournalist.
Hear More: David Carson on St. Louis on the Air
For a comprehensive discussion with photojournalist David Carson, including more insights from his research into AI images and his detailed argument on why AI companies face a copyright dilemma, you can listen to his interview on St. Louis on the Air. The episode is available on Apple Podcasts, Spotify, or YouTube.
St. Louis on the Air brings you stories from St. Louis and its vibrant community. The show is produced by Miya Norfleet, Emily Woodbury, Danny Wicentowski, Elaine Cha, and Alex Heuer. The audio engineer is Aaron Doerr. Questions and comments about this story can be sent to talk@stlpr.org.