As AI-powered tools continue to reshape the hiring process, many employers are turning to automation to screen applications, rank candidates, and speed up decision-making. While these technologies offer real efficiency gains, they also introduce serious risks, particularly around fairness, compliance, and equal opportunity.

A recent lawsuit against Workday has brought these concerns into sharper focus. The company is facing a collective action alleging that its AI-driven hiring tools disproportionately screened out applicants based on race, age, and disability status. While the legal outcome remains uncertain, the case serves as an important reminder: AI can unintentionally embed bias into critical employment decisions if not carefully managed. 

Why AI Can Be Risky in Hiring 

AI hiring tools typically rely on algorithms trained on historical data. If that data reflects past patterns of discrimination—or if it overemphasizes certain traits as indicators of success—it can result in biased outcomes, even when protected characteristics like race or age are not explicitly included. 

Other risks include: 

  • Lack of transparency: Many AI systems function as “black boxes,” making it difficult to explain why a candidate was rejected. 
  • Over-reliance on automation: When employers rely too heavily on automated filters, qualified candidates may be overlooked without human oversight. 
  • Legal liability: Employers remain responsible for ensuring that any hiring tool, including those provided by third-party vendors, complies with anti-discrimination laws. 

How Employers Can Reduce the Risk 

To reduce the risk of unfair or discriminatory outcomes, employers using AI in hiring should take proactive steps:

  1. Audit Tools Regularly
    Conduct independent audits of AI tools to evaluate bias. Make adjustments when patterns of exclusion are identified.
  2. Use Transparent Systems
    Choose vendors that offer explainable AI solutions and provide insight into how candidate data is used and evaluated.
  3. Maintain Human Oversight
    AI should assist—not replace—human decision-making. Final hiring decisions should always include review by a trained professional.
  4. Keep Compliance Top of Mind
    Ensure your hiring processes comply with the anti-discrimination laws enforced by the Equal Employment Opportunity Commission (EEOC), including Title VII, the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), and other relevant laws.
  5. Document and Review
    Maintain clear documentation of how AI tools are selected, configured, and monitored. Regularly review outcomes to identify potential red flags.
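As an illustration of what the outcome review in steps 1 and 5 can look like in practice, the sketch below applies the EEOC's four-fifths (80%) rule of thumb to screening data: a group whose selection rate falls below 80% of the highest group's rate is flagged for closer review. The group labels and numbers here are invented for the example, and the four-fifths rule is only a screening heuristic, not a legal conclusion.

```python
# Hypothetical screening outcomes per group: (applicants screened, applicants advanced).
screening_outcomes = {
    "group_a": (500, 200),
    "group_b": (400, 100),
}

def selection_rates(outcomes):
    """Return the selection rate (advanced / screened) for each group."""
    return {group: advanced / screened
            for group, (screened, advanced) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the top group's rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate
            for group, rate in rates.items()
            if rate / top_rate < threshold}

for group, ratio in four_fifths_flags(screening_outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f} is below 0.80 -- review recommended")
```

A check like this only surfaces statistical red flags; any flagged pattern should be escalated to counsel and investigated before conclusions are drawn about the tool itself.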

AI has the potential to support better hiring decisions, but only when used thoughtfully and responsibly. Employers must remain vigilant and ensure these tools enhance—rather than undermine—efforts to build fair, diverse, and inclusive workplaces. 

Technology is evolving, and so is the legal landscape. Now is the time for employers to take a hard look at the role AI plays in their hiring process—and to prioritize ethical, transparent practices that protect both candidates and the business.