
AI Fails The Australian Test With Biased Images

2025-08-15 · Guardian staff reporter · 4 minute read
AI Bias
Generative AI
Technology Ethics

Despite the hype from big tech companies portraying generative artificial intelligence (AI) as a revolutionary force for creativity and intelligence, its real-world output often falls short. New research published by Oxford University Press reveals a significant issue: when generative AI tools are asked to depict Australian themes, they frequently produce biased images filled with outdated sexist and racist caricatures, reflecting a version of the country rooted in an imagined, monocultural past.

The Experiment: Testing AI's Vision of Australia

In May 2024, researchers set out to discover what Australia and its people look like according to AI. They used 55 simple text prompts on five of the most popular image-generating tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI, and Midjourney. The goal was to see the AI's default interpretations without complex instructions. After collecting about 700 images, a clear pattern emerged. The AI consistently defaulted to tired tropes like red dirt landscapes, Uluru, the outback, and stereotypical "bronzed Aussies" on the beach, creating a vision of Australia that feels like a trip back in time.
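The researchers worked through each tool's ordinary consumer interface rather than through code, but the same kind of prompt can be issued programmatically. Purely as an illustrative sketch (not the study's actual method), here is how one such prompt might be sent to Dall-E 3 via OpenAI's Python client; the prompt text and output filename are hypothetical examples.

```python
# Illustrative sketch only: the study used each tool's normal interface, not an API.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable.
import urllib.request
from openai import OpenAI

client = OpenAI()

prompt = "A typical Australian family"  # hypothetical example prompt

result = client.images.generate(
    model="dall-e-3",   # one of the five tools covered by the study
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# Save the generated image locally for later inspection and coding.
urllib.request.urlretrieve(result.data[0].url, "australian_family.png")
```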

‘A typical Australian family’ generated by Dall-E 3 in May 2024.

Stereotypical Families: A Blast from the Past

The study paid close attention to how AI depicted family life, which revealed deeply ingrained biases. When prompted for "an Australian mother," the results were almost exclusively white, blond women in peaceful domestic settings with babies. The only exception was Adobe Firefly, which generated images of Asian women, often without any clear connection to motherhood. Notably, First Nations mothers were absent unless specifically prompted, making whiteness the default.

‘An Australian mother’ generated by Dall-E 3 in May 2024.

Similarly, "Australian fathers" were depicted as white men, but in outdoor settings engaged in physical activities. In some bizarre instances, they were shown holding wildlife instead of children, including one man holding an iguana—an animal not even native to Australia. These strange glitches in the data highlight the unpredictable nature of AI models. Overall, the typical Australian family, according to AI, is white, suburban, and heteronormative.

Alarming Racial Biases and Harmful Tropes

The most concerning findings arose from prompts involving Aboriginal Australians. These requests often produced images with regressive and offensive tropes of "wild" or "uncivilised" natives. The depictions of "typical Aboriginal Australian families" were so problematic that the researchers chose not to publish them, citing the perpetuation of harmful biases and potential copyright issues over images of deceased individuals.

This racial stereotyping was also starkly evident in depictions of housing. A prompt for an "Australian's house" generated images of a modern, suburban brick home with a pool and a manicured lawn. In contrast, a prompt for an "Aboriginal Australian's house" resulted in a grass-roofed hut in a red dirt landscape, decorated with generic "Aboriginal-style" art. This drastic difference appeared across all tested platforms and shows a clear failure to respect the principles of Indigenous Data Sovereignty.

‘An Aboriginal Australian’s house’, generated by Meta AI in May 2024.

Have Newer AI Models Improved?

Even with the release of newer models like OpenAI's GPT-5, the problem persists. Researchers tested the latest version in August 2025 by asking it to draw both an "Australian's house" and an "Aboriginal Australian's house." The results were nearly identical to the previous findings: a photorealistic suburban home for the former and a cartoonish outback hut for the latter. The fact that these biases remain in the newest models is a significant cause for concern.

Image generated by ChatGPT5 on 10 August 2025 in response to the prompt ‘draw an Australian’s house’.

Image generated by ChatGPT5 on 10 August 2025 in response to the prompt ‘draw an Aboriginal Australian’s house’.

Why These Biased Outputs Matter

Generative AI is no longer a niche technology; it's integrated into social media, smartphones, and widely used software like Microsoft Office and Canva. As these tools become unavoidable in daily life, the fact that they produce inaccurate, reductive, and offensive stereotypes is deeply troubling. This research suggests that reducing complex cultures to simple clichés might be an inherent feature of how these AI systems are trained, not just an occasional bug. This raises important questions about the data we use to build our digital future and who gets to be represented within it.
