The Perils of AI in Workplace Decision Making
Artificial intelligence is no longer just for automating simple tasks or analyzing data. It has become a key tool for managers making critical decisions. Recent studies show that over 60% of managers are now using AI for hiring, firing, layoffs, and promotions. Shockingly, more than one in five of these managers allow AI to make these calls with zero human input. This blind reliance on technology is creating a minefield of legal risks for companies.
While using AI to streamline employment decisions is tempting, it's crucial to understand its limitations. AI output is only as good as the data it receives. These systems can't grasp context, lack empathy, and risk producing biased outcomes that can lead to widespread, unintentional discrimination.
Cautionary Tales: When AI Gets It Wrong
History is already filled with examples of AI-driven HR strategies going awry.
In 2014, Amazon tried to automate its hiring process, only to find that its new system was systematically rejecting female applicants. The algorithm was trained on ten years of hiring data from a male-dominated tech industry, so it learned to prefer male resumes, penalizing resumes that contained the word "women's" and downgrading graduates of all-women's colleges. If Amazon hadn't had humans review the system's outputs, it would have illegally filtered out countless qualified women and faced serious discrimination claims.
As AI has become more common, government bodies like the Equal Employment Opportunity Commission (EEOC) and the Department of Labor (DOL) have taken notice. They recognize that AI tools can violate numerous anti-discrimination laws. For instance, the HR company Workday, Inc. is facing a lawsuit alleging its AI recommendation engine discriminated against applicants based on race, age, and disability. In another case, the EEOC sued iTutorGroup for using software that automatically rejected female applicants over 55 and male applicants over 60.
Best Practices for Using AI in HR
Even if an AI is told to ignore demographics like race or gender, it can still produce biased results by picking up on correlated data, such as a zip code that tracks race or a graduation year that tracks age. To avoid the legal pitfalls, companies should implement the following best practices.
- Adopt Clear AI Governance and Policies: Establish formal policies that define acceptable and prohibited uses of AI. Your policy should address confidentiality, bias mitigation, and transparency. Key questions to consider are:
- What is our company's overall approach to AI governance?
- How will we prevent negative or unintended consequences from AI?
- How will we mitigate the risk of AI misuse by employees?
- How do we ensure human accountability and proper employee training?
- How will we monitor AI systems, especially self-learning models, over time?
- Implement Formal AI Training for Managers: A staggering two-thirds of managers using AI have received no formal training. It is essential to train managers to use these systems effectively and in a non-discriminatory way.
- Validate Models and Conduct Regular Audits: Understand the algorithms and data used by any AI tool, even if it's from a third-party vendor. Your company can be held liable for a third-party tool's discriminatory output. Regular audits are critical to spot and correct any biases that emerge as the model learns and evolves.
- Ensure Meaningful Human Oversight: AI should be a tool, not the final decision-maker. Humans must remain in the loop to provide context, prevent unintended bias, and uphold company values.
- Require Disclosure of AI Use: Be transparent about how and when you use AI. This practice not only reduces legal risk but also builds trust with employees, clients, and customers.
- Monitor Evolving Laws: The legal landscape for AI is changing rapidly. While the federal approach has varied, several states have enacted their own strict AI laws. Jurisdictions like Colorado, Illinois, Maryland, Utah, and New York now have specific compliance requirements, such as bias audits and candidate notifications.
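As one concrete illustration of the auditing practice above, a common first-pass screen is the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the tool may be producing adverse impact and warrants closer review. A minimal sketch in Python follows; the group labels and counts are hypothetical, and a real audit would also apply statistical significance testing and legal review.

```python
# Sketch of a four-fifths-rule adverse-impact screen.
# A selection rate below 80% of the highest group's rate is flagged
# as potential adverse impact (a screen, not a legal conclusion).
# Group names and counts below are hypothetical illustration data.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, total applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: True} for groups whose selection rate is
    less than `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top) < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    hypothetical = {
        "group_a": (48, 100),  # 48% selection rate
        "group_b": (30, 100),  # 30% selection rate
    }
    # group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged.
    print(four_fifths_check(hypothetical))
```

Running a check like this on each audit cycle (and after every model retrain) helps catch the drift that self-learning systems can introduce over time.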
Key Takeaways for Employers
As AI continues to reshape the workplace, staying informed about the legal and ethical risks is paramount. To protect your organization, develop a strong AI governance plan, provide thorough employee training, and educate managers on the hidden biases that can undermine even the most seemingly objective automated systems.