
AI Models Reinforce Gender Pay Gap, Study Finds

2025-07-11 · Siôn Geschwindt · 3 min read
AI
Gender Bias
Technology

A startling new study has revealed that popular large language models (LLMs) like ChatGPT are systematically advising women to seek lower salaries than men, even when presented with identical qualifications and experience. This research highlights a critical flaw in AI systems, showing they can perpetuate and even amplify existing societal biases.

Study Uncovers AI's Troubling Salary Advice

The research, spearheaded by Ivan Yamshchikov, a professor of AI and robotics at the Technical University of Würzburg-Schweinfurt (THWS) in Germany, put five leading LLMs to the test. The methodology was simple yet effective: the team created user profiles that were identical in every aspect—education, experience, and job role—with the sole difference being the applicant's gender. The models were then prompted to suggest a target salary for a negotiation.
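The paired-prompt setup can be sketched as follows. This is a minimal illustration of the method described above, not the study's actual code: the profile wording is hypothetical, and sending each prompt to a model would be done separately (e.g. via an LLM API client).

```python
# Minimal sketch of the study's paired-prompt method: two applicant profiles
# that are identical except for the gender word, posed to the same model.
# (Illustrative only -- the profile text below is a hypothetical example.)

def build_prompt(gender: str) -> str:
    """Build a salary-negotiation prompt for an otherwise identical profile."""
    return (
        f"I am a {gender} applicant with an M.D. and 10 years of experience, "
        "applying for a senior physician position. "
        "What target salary should I ask for in the negotiation?"
    )

female_prompt = build_prompt("female")
male_prompt = build_prompt("male")

# The two prompts differ only in the gender word, so any gap between the
# model's suggested figures can be attributed to that single change.
print(female_prompt)
print(male_prompt)
```

In the study, each prompt in the pair was then submitted to the model under test and the recommended salary figures were compared.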

The results were alarming. In one striking example using OpenAI’s ChatGPT, a prompt for a female applicant yielded a significantly lower salary recommendation than the exact same prompt for a male applicant.

[Image: ChatGPT's response for the female applicant. Credit: Ivan Yamshchikov.]

[Image: ChatGPT's response for the male applicant. Credit: Ivan Yamshchikov.]

As Yamshchikov noted, “The difference in the prompts is two letters, the difference in the ‘advice’ is $120K a year.” This pay gap was most severe in fields like law and medicine, followed by business and engineering. Only in the social sciences did the AI offer comparable advice. The study also found this gender-based differentiation extended to career choices and goal-setting, all without any disclaimer from the AI about its inherent bias.

A Familiar Pattern of AI Bias

This is not an isolated incident but part of a troubling trend where AI reflects and reinforces systemic inequities. In 2018, Amazon had to abandon a proprietary hiring tool after it was found to consistently downgrade resumes from female candidates. More recently, a clinical AI model was shown to underdiagnose women and Black patients because it was trained on data sets predominantly featuring white men.

The Path Forward for Ethical AI

The researchers from the THWS study contend that merely applying technical fixes is insufficient to address the root of the problem. They advocate for a more robust framework built on clear ethical standards, mandatory independent review processes, and much greater transparency into how these influential models are trained and deployed.

As generative AI becomes a trusted advisor for everything from mental wellness to career development, the stakes are higher than ever. If left unchecked, the very illusion of AI's objectivity could become its most dangerous and damaging feature.
