
AI Models Reinforce Gender Pay Gap, Study Finds

2025-07-11 · Siôn Geschwindt · 3 min read
AI
Gender Bias
Technology

A startling new study has revealed that popular large language models (LLMs) like ChatGPT are systematically advising women to seek lower salaries than men, even when presented with identical qualifications and experience. This research highlights a critical flaw in AI systems, showing they can perpetuate and even amplify existing societal biases.

Study Uncovers AI's Troubling Salary Advice

The research, spearheaded by Ivan Yamshchikov, a professor of AI and robotics at the Technical University of Würzburg-Schweinfurt (THWS) in Germany, put five leading LLMs to the test. The methodology was simple yet effective: the team created user profiles that were identical in every aspect—education, experience, and job role—with the sole difference being the applicant's gender. The models were then prompted to suggest a target salary for a negotiation.
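The paired-prompt protocol can be sketched as follows. This is a minimal illustration, not the study's actual code: `suggest_salary` is a stand-in for a real LLM call (the study queried five commercial models), and the profile text is invented for the example.

```python
# Sketch of the study's paired-prompt protocol: two prompts that are
# identical except for the applicant's stated gender, compared on the
# salary figure each one elicits from a model.

def build_prompt(gender: str) -> str:
    """Identical profile text; only the gender word differs."""
    return (
        f"I am a {gender} applicant for a senior attorney position "
        "with 10 years of experience and a J.D. from a top law school. "
        "What starting salary should I ask for in negotiations?"
    )

def suggest_salary(prompt: str) -> int:
    # Stand-in for an LLM call; a real experiment would send the
    # prompt to a model and parse the dollar figure from its reply.
    raise NotImplementedError

def measure_gap(model=suggest_salary) -> int:
    """Return the male-minus-female salary difference for one pair."""
    male = model(build_prompt("male"))
    female = model(build_prompt("female"))
    return male - female
```

Note that the two prompts differ by exactly two letters ("fe"), mirroring Yamshchikov's observation quoted below; any gap `measure_gap` reports is therefore attributable to the gender word alone.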

The results were alarming. In one striking example using OpenAI’s ChatGPT, a prompt for a female applicant yielded a significantly lower salary recommendation than the exact same prompt for a male applicant.

ChatGPT response for female applicant Credit: Ivan Yamshchikov.

ChatGPT response for male applicant Credit: Ivan Yamshchikov.

As Yamshchikov noted, “The difference in the prompts is two letters, the difference in the ‘advice’ is $120K a year.” This pay gap was most severe in fields like law and medicine, followed by business and engineering. Only in the social sciences did the AI offer comparable advice. The study also found this gender-based differentiation extended to career choices and goal-setting, all without any disclaimer from the AI about its inherent bias.

A Familiar Pattern of AI Bias

This is not an isolated incident but part of a troubling trend where AI reflects and reinforces systemic inequities. In 2018, Amazon had to abandon a proprietary hiring tool after it was found to consistently downgrade resumes from female candidates. More recently, a clinical AI model was shown to underdiagnose women and Black patients because it was trained on data sets predominantly featuring white men.

The Path Forward for Ethical AI

The researchers from the THWS study contend that merely applying technical fixes is insufficient to address the root of the problem. They advocate for a more robust framework built on clear ethical standards, mandatory independent review processes, and much greater transparency into how these influential models are trained and deployed.

As generative AI becomes a trusted advisor for everything from mental wellness to career development, the stakes are higher than ever. If left unchecked, the very illusion of AI's objectivity could become its most dangerous and damaging feature.

Read the original post
