
Readers Are Wary of AI Images in News

2025-11-07 · Sarah Scire · 4 minute read
Artificial Intelligence
News Media
Audience Perception

The Rise of AI Imagery and Audience Uncertainty

What do news readers actually think about AI-generated images? While much research has focused on audience reactions to AI-generated text, there has been comparatively little on image generators like Midjourney, Adobe Firefly, and DALL-E. The conversation has gained new urgency with the release of advanced tools like OpenAI’s Sora 2, as high-quality AI videos, including some depicting real people, have begun to flood social media.

A Deep Dive into Public Perception

A new study in Digital Journalism titled, "Reality Re-Imag(in)ed. Mapping Publics’ Perceptions and Evaluation of AI-generated Images in News Context," directly addresses this question. The research, conducted by University of Amsterdam professors Edina Strikovic and Hannes Cools, involved four focus groups with 25 Dutch residents. While the findings have geographical limitations, they provide valuable, in-depth insights into public attitudes toward AI imagery in news.

The Challenge of Telling Real from Fake

Researchers first asked participants about their previous encounters with AI-generated images. Most said they rarely saw them in established news outlets but frequently encountered them on social media platforms like Instagram and TikTok. A significant finding was that most participants admitted they did not know how to reliably distinguish between a real photograph and an AI-generated one.

Some mentioned looking for subtle flaws, such as unusual lighting or an image that appears "too perfect." However, the majority relied on gut feelings or explicit labels. Participants felt it was much harder to verify the authenticity of an image than a piece of text, stating they wouldn't know where to begin to fact-check visual information.

A Line in the Sand: Illustrative vs. Photorealistic AI

When discussing the use of AI images by news organizations, participants made a clear distinction between illustrations and photorealistic content. Many were comfortable with news outlets using AI to generate charts, data visualizations, or satirical cartoons. As one participant noted, "If it has a guiding function, then I don’t have much objection to it. So illustrative, or a graph or something like that."

However, attitudes shifted dramatically based on the story's topic. While AI images for entertainment or "softer" news were seen as more acceptable, their use for serious topics like politics and conflict was considered entirely inappropriate.

Core Concerns: Eroding Trust and Reality

The focus groups consistently voiced deep-seated anxieties about the broader consequences of news organizations adopting image generators. Many participants spoke of photographs as a form of "eyewitness" testimony—proof that an event occurred. They feared that an increased reliance on AI images would erode a sense of "shared reality."

Another major concern was algorithmic bias. Participants worried that news outlets using AI could inadvertently reinforce harmful stereotypes. One person pointed out that asking an AI for a "happy family" often results in a stereotypical white, nuclear family. Another added that prompts for a "doctor" typically yield a man, while a "nurse" yields a woman.

Finally, participants highlighted the passive nature of consuming images. Unlike text, which requires active reading, a photo can be scrolled past and internalized instantly. One person explained, "Forwarding a photo has so much more effect than an article... the image starts to live its own life."

The Verdict: Risks Outweigh the Benefits

Across all focus groups, a central question emerged: do the benefits of using AI-generated images in news outweigh the harms? The consensus was a resounding no. Participants felt the potential gains for news organizations were "negligible when compared to the risks of an AI-generated reality."

The Call for Transparency and Its Paradox

If news outlets are to use AI images, participants were nearly unanimous in their demand for explicit labeling, similar to how sponsored content is marked. Some even requested that the labels include the specific AI tool and the text prompt used.
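As a rough illustration only, and not something proposed in the study, here is a minimal Python sketch of the kind of disclosure record a newsroom system could attach to an AI-generated image and render as a reader-facing caption, much like a sponsored-content tag. The field names and the build_disclosure_caption helper are hypothetical.

# Hypothetical sketch of an AI-image disclosure label that surfaces the tool
# and the prompt, as the focus-group participants requested. Field names are
# illustrative, not any existing standard.
from dataclasses import dataclass

@dataclass
class AIImageDisclosure:
    generator: str      # model or service used to create the image
    prompt: str         # text prompt supplied to the generator
    generated_on: str   # ISO date of generation
    human_edited: bool  # whether staff retouched the output

def build_disclosure_caption(d: AIImageDisclosure) -> str:
    """Render a reader-facing caption, analogous to a sponsored-content tag."""
    edited = "edited by staff" if d.human_edited else "unedited"
    return (f"AI-generated illustration ({d.generator}, {d.generated_on}, {edited}). "
            f"Prompt: \"{d.prompt}\"")

if __name__ == "__main__":
    label = AIImageDisclosure(
        generator="Midjourney",
        prompt="a happy family at the dinner table, photorealistic",
        generated_on="2025-11-07",
        human_edited=False,
    )
    print(build_disclosure_caption(label))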

However, the researchers noted a "transparency paradox" in these demands:

"Participants consistently expressed a desire for clear disclosure of AI-generated content, they simultaneously demonstrated uncertainty about what specific information they needed and how they would use such disclosures. This paradox manifests in audiences wanting transparency while lacking frameworks for meaningfully processing that information."

You can explore the complete findings in the full study published in Digital Journalism.
