
The Dark Side of AI Fake Images and Misinformation

2025-10-11 · Daniel Keane, Eva Blandis · 6 minute read
Misinformation
Artificial Intelligence
Digital Literacy

The thought of a young boy lost in the vast outback is harrowing enough. But in the case of missing four-year-old Gus, this tragedy has been compounded by a disturbing trend: the use of artificial intelligence to create and spread misinformation on social media.

In the weeks since Gus vanished from his family's remote homestead in South Australia, deceptive AI-generated images of the boy have surfaced online. These false reports and manipulated photos, including some depicting the search, are being shared across social media, prompting alarm from technology and legal experts about the ease of producing harmful, deceitful content.

The type of terrain that characterised the search area. (ABC News: Justin Hewitson)

While not all of these posts have gained significant traction, enough have circulated to raise important questions: How can we distinguish fact from fiction, and what can be done to combat content that preys on public emotion?

The Spread of AI-Generated Falsehoods

Recently, a particularly troubling post circulated on Facebook featuring an AI-generated image of a boy with long blonde hair being held by a man near a four-wheel drive, accompanied by the text, "Is this a kidnapping case?"

This image appeared on a Facebook page that has published over 20 fake posts about Gus in just five days. These posts include fabricated breakthroughs in the case and even a staged reunion with loved ones, complete with police in American-style uniforms, under the headline "a miracle in the outback." Some of these posts have attracted hundreds or thousands of reactions, comments, and shares.

A tracker was called in to help in the search. (ABC News: Daniel Taylor)

Flinders University law lecturer Joel Lisk highlights the multifaceted harm caused by such images.

"We've got the emotional harm and distress that's caused by that kind of content being released publicly and being thrown into the public domain when family are emotionally distraught," he said. "It might create either false hope or, on the flip side, distress that people are taking advantage of their personal harm and their circumstances for what is effectively clickbait or generating traffic on an online platform. It also negatively impacts reliability and trust more broadly."

While many users have condemned the posts, writing comments like, "This is sick! Don't play with people like this, it's not cool," the damage is often already done. The posts link to fraudulent news stories, and attempts to contact the page administrators are met with dead ends.

How to Spot an AI-Generated Fake

Generative AI technology is still in its infancy but is evolving with incredible speed. According to RMIT computing expert Michael Cowling, these tools work by scraping vast amounts of information from existing images and drawing on it to create new ones.

While some fakes are obvious, others are more subtle. Even so, there are common flaws to look for: artificially generated images often contain telltale blemishes.

"For the time being … it does still have trouble with lighting, with depth, with shadows, it's historically had trouble with generating hands or positioning limbs in the right place, or smoothing out differences between backgrounds and foregrounds," Professor Cowling explained.

He refers to this as the 'uncanny valley.'

"The 'uncanny valley' is when you see an image and you can't quite work out what's wrong with it but you know it's not right."

The Motives Behind the Misinformation

Why would someone deliberately create and share false information about a missing child? The motivations can be murky, but Dr. Lisk suggests two primary drivers.

Police released an image of a footprint on the property they believe could have been left by Gus. (Supplied: SA Police)

One possibility is sheer malice. "Unfortunately, people can be horrible and they're doing it to take advantage of a situation and to see how much traction they get with this content," he said.

The other major factor is money. Many of these disreputable websites are plastered with advertisements. "If content creators can develop pages that have high followings and high reach, there is potential there in the future for them to use those pages and those platforms to develop revenue for themselves through ads or sponsored posts," Dr. Lisk noted. "The more outlandish, the more horrifying … content they post … it can increase their personal revenue."

This incident raises the question of whether new laws are needed to outlaw fake images, especially during active missing persons investigations. Dr. Lisk says that while some broad consumer protection laws against misleading and deceptive conduct could potentially apply, there is room to strengthen legislation.

"You can create a law that perhaps prohibits the use of generative AI content in connection with active police investigations."

However, the primary challenge is enforcement. "We can serve take-down notices … but at the end of the day every time someone posts this content, it spreads like wildfire and you end up with hundreds and hundreds of different versions of the same bit of misinformation," Dr. Lisk added.

Dr. Lisk said there could be a range of motivations behind the deceptive content. (Supplied: Flinders University)

Proposed federal laws to regulate misinformation were abandoned late last year, but Professor Cowling believes lawmakers must not give up on reform.

"Yes, I think we should probably try and do that as quickly as we can because ChatGPT and generative AI is changing the world very quickly."

In the short term, solutions like requiring AI companies to watermark their generated images and text could help. The challenge lies in getting universal agreement among the rapidly growing number of AI developers.
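To make the watermarking idea concrete, here is a toy sketch in Python, using the Pillow imaging library, of how a generator might hide an invisible marker in an image's pixels and how a platform might check for it. The signature string and file names are hypothetical, and real proposals, such as provenance metadata standards or model-level statistical watermarks, are far more robust than this least-significant-bit illustration.

```python
from PIL import Image

SIGNATURE = b"AI-GENERATED"  # hypothetical marker; real schemes use more robust signals


def embed_watermark(src_path: str, dst_path: str) -> None:
    """Hide SIGNATURE in the least significant bit of each pixel's red channel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in SIGNATURE)
    width, height = img.size
    assert len(bits) <= width * height, "image too small to hold the marker"
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save(dst_path, "PNG")  # lossless format, so the hidden bits survive


def has_watermark(path: str) -> bool:
    """Read back the red-channel LSBs and compare them to SIGNATURE."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, _ = img.size
    n_bits = len(SIGNATURE) * 8
    bits = "".join(str(pixels[i % width, i // width][0] & 1) for i in range(n_bits))
    decoded = bytes(int(bits[j:j + 8], 2) for j in range(0, n_bits, 8))
    return decoded == SIGNATURE


if __name__ == "__main__":
    embed_watermark("generated.png", "generated_marked.png")  # hypothetical files
    print(has_watermark("generated_marked.png"))  # True
```

The fragility of naive approaches is also visible here: simply re-saving the marked file as a JPEG would destroy the hidden bits, which is one reason watermarking only helps if it is adopted, and hardened, across the industry.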

Your Best Defence: Critical Thinking and Source Verification

As AI technology becomes more sophisticated, visual tells may become harder to spot. Therefore, one of the most crucial clues isn't in the image itself, but in its context.

Experts have warned of the issues associated with AI-generated images. (ABC News: Daniel Taylor)

Professor Cowling stresses the importance of verifying the source. Is it a reputable news organisation or an anonymous social media page?

"As social media has taken hold... we need to teach people to be a little bit more critical of what they're looking at," he said. "Understanding the source that something came from — when it was shared, who it was shared by … I think that [principle] still applies."
