
How AI Is Finally Learning To See Disability

2025-10-15 · Yasmin Rufo · 4 minute read
AI Bias
Representation
Technology

When former Australian Paralympic swimmer Jess Smith decided to use an AI image generator to touch up a headshot, she inadvertently started a social experiment. Her experience sheds light on the deep-seated biases within artificial intelligence, and on the recent but crucial steps being taken toward a more inclusive digital world.

Jessica Smith wearing a black swimming costume

A Picture AI Couldn't Paint

Jess uploaded a photo of herself and prompted the tool with a very specific request: create an image of her, a woman missing her left arm from below the elbow. The results were telling. Despite repeated attempts, ChatGPT either generated an image of a woman with two arms or equipped her with a metal prosthetic. The AI simply couldn't comprehend her reality.

When she asked the AI why it was struggling, the answer was straightforward: it didn't have enough data to work with. "That was an important realisation for me that of course AI is a reflection of the world we live in today and the level of inequality and discrimination that exists," Smith says.

A Breakthrough in Representation

Just a few months later, Jess tried again. This time, the result was astonishingly different. The AI successfully generated an accurate image of a woman with one arm, just like her. "Oh my goodness, it worked, it's amazing it's finally been updated," she told the BBC. "This is a great step forward."

AI image of a woman with one arm

This might seem like a small technical fix, but its significance is profound for millions of people with disabilities. As Jess puts it, "Representation in technology means being seen not as an afterthought, but as part of the world that's being built... This is more than progress in tech; it's progress in humanity."

OpenAI, the company behind ChatGPT, acknowledged it had "made meaningful improvements" to its model but noted that "challenges remain, particularly around fair representation."

The Unseen Biases That Remain

While Jess's story marks a victory for representation, others show how far AI still has to go. Naomi Bowman, who has sight in only one eye, encountered a similar bias. When she asked ChatGPT to simply blur the background of a photo, it altered her face and "evened out" her eyes without her permission.

Naomi Bowman's real photo compared to an AI-edited one

"Even when I specifically explained that I had an eye condition and to leave my face alone; it couldn't compute," she says. Initially finding it funny, she now feels it highlights an "inherent bias within AI." Naomi is calling for AI models to be trained on broader data sets to ensure everyone is represented fairly.

Building a More Inclusive AI

Experts agree that AI bias is often a mirror of societal blind spots. Abran Maldonado, CEO of Create Labs, emphasizes that diversity must be present from the very beginning of the development process. "It's about who's in the room when the data is being built," he explains. Without consulting people with lived experiences, AI will inevitably miss them.

This issue isn't new. A 2019 US government study revealed that facial recognition algorithms were significantly less accurate at identifying African-American and Asian faces than Caucasian faces. This shows a long-standing pattern of bias baked into technology.

For Jess, the problem isn't her disability but the societal barriers created by a world not designed for everyone. Whether it's a public restroom tap that requires two hands or an AI that cannot picture a body with one, the oversight is the same. She believes there's a serious risk of repeating these mistakes in the digital world. The conversation around disability can be awkward, but as she notes, backing away from it only ensures that these exclusionary systems continue to be built.
