Why Generative AI Fails at Plain Language Accessibility
Plain language is an important part of making things accessible for disabled people. We are very worried that people are using artificial intelligence to translate text into plain language without realizing that it cannot do that work correctly. We tested multiple artificial intelligence models, and all of them made big mistakes that changed the meaning of the text. For this and other reasons, we call on other organizations not to use artificial intelligence for plain language translation.
Understanding AI and the Push for Plain Language
Artificial intelligence, or AI, is technology that lets a computer program perform tasks that typically require human intelligence. You interact with AI every day, from the spam filter in your email to machine translators that convert text from one language to another. While these tools can be helpful, they aren't perfect and often make mistakes.
Recently, the conversation has shifted to generative AI. This specific type of AI, including tools like ChatGPT, Google Gemini, and Microsoft Copilot, uses existing data to create entirely new content, such as text, images, or music.
People have started using generative AI to create plain language versions of their work. For example, a researcher might ask an AI to "Rewrite this paper so it is at a 6th grade reading level." However, when a person with an intellectual disability tries to read this AI-generated "plain language" paper, they often find it confusing, incomplete, and difficult to understand. This is because using generative AI for accessibility creates more problems than it solves.
Why Generative AI Fails: It Changes the Meaning
The words we use matter, but generative AI often changes them in ways that alter the entire meaning of a document.
For instance, a paper explaining that sheltered workshops are harmful to disabled workers could be rewritten by an AI to sound like they are a good thing. The AI might misinterpret the words "shelter" or "workshop," or it might pull from biased data sources that support these harmful institutions.
In an experiment at ASAN, we fed an AI text about how autistic people deserve rights. The AI added new, unprompted sentences about autistic people having rare and amazing talents. This is a harmful stereotype that we would never promote. We believe autistic people deserve rights regardless of any perceived "special talents."
Generative AI also struggles to tell fact from fiction. If it learns from data containing lies, it will repeat those lies. You may have seen headlines about an AI that told people to put glue on their pizza. When information needs to be accurate and trustworthy, especially for the disability community, generative AI is not a reliable tool.
A Concept Too New: AI Can't Grasp Plain Language
To work well, generative AI needs a massive amount of data to build a "model," which is a guide it uses to respond to prompts. There are millions of cat pictures online, so an AI can build a good model of a cat.
However, there are not a lot of high-quality plain language documents available for AI to learn from. The concept is too new and nuanced. Without a good model, the AI is essentially guessing, which is why it makes so many mistakes when attempting to write in plain language.
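To make the data problem concrete, here is a toy sketch of the simplest possible statistical language model, a bigram model. This is not how tools like ChatGPT actually work (they use vastly larger neural networks), and the two-sentence training corpus is hypothetical, but it shows the same scaling problem: with too little data, the model can only parrot fragments or recombine them by guessing.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it learns which word tends to follow
# which, then generates text by sampling from those learned pairs.
def train(corpus: list[str]) -> dict[str, list[str]]:
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current].append(nxt)
    return follows

def generate(follows: dict[str, list[str]], start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # the model has never seen this word, so it is stuck
            break
        word = random.choice(options)  # with sparse data, this is a guess
        output.append(word)
    return " ".join(output)

# Hypothetical tiny corpus: with this little data, the model can only
# echo these fragments or splice them together in meaning-changing ways.
corpus = [
    "plain language explains hard ideas clearly",
    "plain language adds detail and context",
]
model = train(corpus)
print(generate(model, "plain"))
```

Modern generative AI is enormously more capable, but the underlying issue scales up with it: where high-quality plain language examples are scarce, the model fills the gaps by guessing from whatever patterns it does have.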
AI Focuses on Words, Not on Explaining Ideas
A key part of plain language isn't just using simpler words; it's about explaining complex ideas more clearly and in greater detail. Generative AI fails at this.
When asked to simplify a text, an AI might just swap out words or remove sentences it deems "complex," which can strip out critical information. It doesn't know which concepts need more detailed explanations or which difficult words require a definition. If you ask it to create a definition, that definition might be wrong or not fit the context of the paper. Plain language is about adding clarity, not just removing words.
The Built-In Bias and Discrimination of AI Models
AI models are trained on data from the internet, which is full of human biases. Since these tools are often built by people in positions of power, they can perpetuate and amplify discrimination against marginalized groups.
Discrimination means being treated unfairly because of who you are. Studies have found that image-generating AI tools produce racist stereotypes, such as only showing white men as doctors and people of color in service jobs.
This discrimination extends to language. When an AI flips an author's stance on sheltered workshops, as in the example above, it promotes an ableist viewpoint. Furthermore, generative AI often equates a lower reading level with writing for children. It adopts a condescending and infantilizing tone, which is unfair and disrespectful to adults with disabilities who deserve to be treated with dignity.
Plain Language Belongs to the Disability Community
Plain language is effective because people with disabilities are directly involved in creating it. Their lived experience is essential for making sure the writing is truly accessible. An AI does not have this experience and never can.
Furthermore, writing and testing plain language documents is a profession for many disabled people. It is a job that is often more accessible than other types of work. Using generative AI to do this work takes jobs away from the disability community.
The best way to ensure your materials are accessible is to work with and fairly pay people with disabilities to write and review them. This follows the core principle of the disability rights movement: Nothing about us without us!
The Path Forward: Human-Centered Accessibility
While we strongly advise against using generative AI for plain language, some other AI tools can be helpful. Non-generative tools like reading level checkers (e.g., Readable, Hemingway) can analyze your text and highlight complex sentences or words. They give you advice but leave the crucial work of rewriting to you, the human author.
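As an illustration of what these non-generative checkers measure, here is a minimal Python sketch of the Flesch-Kincaid grade level, one common readability formula. The syllable counter is a rough heuristic of our own (real tools such as Readable use more sophisticated methods), and the sample text is hypothetical.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels.
    Real checkers use pronunciation dictionaries; this is approximate."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually doesn't add a syllable ("wage", "phrase").
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Hypothetical sample text to score.
sample = ("Sheltered workshops pay disabled workers less than minimum wage. "
          "We believe this practice is unfair and should end.")
print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```

Note what this formula counts: word and sentence length, nothing more. That is exactly why such checkers can flag complex sentences but can never decide which ideas need a fuller explanation. That judgment stays with the human writer.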
Ultimately, creating accessible materials requires human connection. Always talk to disabled people first. Their feedback is the most valuable tool you have for making your work stronger and more accessible. We don't need to rely on flawed AI when we have the expertise of disabled people ready to lead the way.