
AI Models Reinforce Gender Pay Gap, Study Finds

2025-07-11 · Siôn Geschwindt · 3 min read
AI
Gender Bias
Technology

A startling new study has revealed that popular large language models (LLMs) like ChatGPT are systematically advising women to seek lower salaries than men, even when presented with identical qualifications and experience. This research highlights a critical flaw in AI systems, showing they can perpetuate and even amplify existing societal biases.

Study Uncovers AI's Troubling Salary Advice

The research, spearheaded by Ivan Yamshchikov, a professor of AI and robotics at the Technical University of Würzburg-Schweinfurt (THWS) in Germany, put five leading LLMs to the test. The methodology was simple but effective: the team created user profiles that were identical in every respect—education, experience, and job role—differing only in the applicant's gender. The models were then prompted to suggest a target salary for an upcoming negotiation.
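The study's own code and prompts aren't reproduced here, but the paired-prompt setup is straightforward to sketch. The Python snippet below illustrates the idea using the OpenAI chat completions API; the model name, prompt wording, and profile details are illustrative assumptions, not the researchers' actual materials.

```python
# Minimal sketch of a paired-prompt bias probe: two prompts identical
# except for the applicant's gender, compared on the salary advice they
# elicit. Prompt wording, model choice, and profile details are
# illustrative assumptions, not taken from the THWS study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROFILE = (
    "I am a {gender} applicant with an M.D., five years of clinical "
    "experience, and an offer for a senior physician role. "
    "What starting salary should I ask for in the negotiation?"
)

def salary_advice(gender: str) -> str:
    """Return the model's salary advice for a profile of the given gender."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study tested five different LLMs
        messages=[{"role": "user", "content": PROFILE.format(gender=gender)}],
        temperature=0,  # reduce run-to-run variance so the pair is comparable
    )
    return response.choices[0].message.content

for gender in ("female", "male"):
    print(f"--- {gender} applicant ---")
    print(salary_advice(gender))
```

In a real audit one would run many such pairs across roles and fields and aggregate the recommended figures, since a single pair of responses can vary by chance.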

The results were alarming. In one striking example using OpenAI’s ChatGPT, a prompt for a female applicant yielded a significantly lower salary recommendation than the exact same prompt for a male applicant.

[Image: ChatGPT's response to the female applicant's prompt. Credit: Ivan Yamshchikov]

[Image: ChatGPT's response to the male applicant's prompt. Credit: Ivan Yamshchikov]

As Yamshchikov noted, “The difference in the prompts is two letters, the difference in the ‘advice’ is $120K a year.” The pay gap was most severe in law and medicine, followed by business and engineering; only in the social sciences did the models offer comparable advice to men and women. The study also found that this gender-based differentiation extended to career choices and goal-setting, and in no case did the models include a disclaimer about their potential bias.

A Familiar Pattern of AI Bias

This is not an isolated incident but part of a troubling trend where AI reflects and reinforces systemic inequities. In 2018, Amazon had to abandon a proprietary hiring tool after it was found to consistently downgrade resumes from female candidates. More recently, a clinical AI model was shown to underdiagnose women and Black patients because it was trained on data sets predominantly featuring white men.

The Path Forward for Ethical AI

The researchers from the THWS study contend that merely applying technical fixes is insufficient to address the root of the problem. They advocate for a more robust framework built on clear ethical standards, mandatory independent review processes, and much greater transparency into how these influential models are trained and deployed.

As generative AI becomes a trusted advisor for everything from mental wellness to career development, the stakes are higher than ever. If left unchecked, the very illusion of AI's objectivity could become its most dangerous and damaging feature.
