
Readers Are Wary of AI Images in News

2025-11-07 · Sarah Scire · 4 minute read
Artificial Intelligence
News Media
Audience Perception

The Rise of AI Imagery and Audience Uncertainty

What do news readers actually think about AI-generated images? While much research has focused on audience reactions to AI-generated text, comparatively little has examined image generators like Midjourney, Adobe Firefly, and DALL-E. The conversation has gained new urgency with the release of advanced tools like OpenAI's Sora 2, as high-quality AI videos, including some depicting real people, have begun to flood social media.

A Deep Dive into Public Perception

A new study in Digital Journalism titled, "Reality Re-Imag(in)ed. Mapping Publics’ Perceptions and Evaluation of AI-generated Images in News Context," directly addresses this question. The research, conducted by University of Amsterdam professors Edina Strikovic and Hannes Cools, involved four focus groups with 25 Dutch residents. While the findings have geographical limitations, they provide valuable, in-depth insights into public attitudes toward AI imagery in news.

The Challenge of Telling Real from Fake

Researchers first asked participants about their previous encounters with AI-generated images. Most said they rarely saw them in established news outlets but frequently encountered them on social media platforms like Instagram and TikTok. A significant finding was that most participants admitted they did not know how to reliably distinguish between a real photograph and an AI-generated one.

Some mentioned looking for subtle flaws, such as unusual lighting or an image that appears "too perfect." However, the majority relied on gut feelings or explicit labels. Participants felt it was much harder to verify the authenticity of an image than a piece of text, stating they wouldn't know where to begin to fact-check visual information.

A Line in the Sand: Illustrative vs. Photorealistic AI

When discussing the use of AI images by news organizations, participants made a clear distinction between illustrations and photorealistic content. Many were comfortable with news outlets using AI to generate charts, data visualizations, or satirical cartoons. As one participant noted, "If it has a guiding function, then I don’t have much objection to it. So illustrative, or a graph or something like that."

However, attitudes shifted dramatically based on the story's topic. While AI images for entertainment or "softer" news were seen as more acceptable, their use for serious topics like politics and conflict was considered entirely inappropriate.

Core Concerns: Eroding Trust and Reality

The focus groups consistently voiced deep-seated anxieties about the broader consequences of news organizations adopting image generators. Many participants spoke of photographs as a form of "eyewitness" testimony—proof that an event occurred. They feared that an increased reliance on AI images would erode a sense of "shared reality."

Another major concern was algorithmic bias. Participants worried that news outlets using AI could inadvertently reinforce harmful stereotypes. One person pointed out that asking an AI for a "happy family" often results in a stereotypical white, nuclear family. Another added that prompts for a "doctor" typically yield a man, while a "nurse" yields a woman.

Finally, participants highlighted the passive nature of consuming images. Unlike text, which requires active reading, a photo can be scrolled past and internalized instantly. One person explained, "Forwarding a photo has so much more effect than an article... the image starts to live its own life."

The Verdict: Risks Outweigh the Benefits

Across all focus groups, a central question emerged: do the benefits of using AI-generated images in news outweigh the harms? The consensus was a resounding no. Participants felt the potential gains for news organizations were "negligible when compared to the risks of an AI-generated reality."

The Call for Transparency and Its Paradox

If news outlets are to use AI images, participants were nearly unanimous in their demand for explicit labeling, similar to how sponsored content is marked. Some even requested that the labels include the specific AI tool and the text prompt used.

However, the researchers noted a "transparency paradox" in these demands:

"[While] participants consistently expressed a desire for clear disclosure of AI-generated content, they simultaneously demonstrated uncertainty about what specific information they needed and how they would use such disclosures. This paradox manifests in audiences wanting transparency while lacking frameworks for meaningfully processing that information."

You can explore the complete findings in the full study published in Digital Journalism.
