AI Compliance · 13 min read

EEOC Guidelines for AI Hiring Tools: A Plain English Breakdown

PersonaScore Team

The EEOC has made its position on AI hiring tools increasingly clear over the past several years, but most of what it has published is written in legal and regulatory language that is hard to translate into practical action. If you are an employer using AI in your hiring process — or considering it — you need to understand what the EEOC has actually said, what it means for your day-to-day operations, and where enforcement is heading. This is the second post in our AI & Hiring Law series, where we break down the legal landscape for AI-powered hiring tools.

What the EEOC Has Actually Said About AI in Hiring

The EEOC has not created a new legal framework for AI. Instead, it has issued guidance explaining how existing anti-discrimination law — primarily Title VII of the Civil Rights Act and the Americans with Disabilities Act — applies to algorithmic and AI-driven hiring tools. The core message is deceptively simple: the same rules that apply to human hiring decisions apply to AI-assisted hiring decisions.

The EEOC's key publications on this topic include:

  • “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” (May 2022): This technical assistance document explains how the ADA applies when employers use AI to evaluate candidates with disabilities.
  • “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures” (May 2023): This guidance specifically addresses how the four-fifths rule and disparate impact analysis apply to AI hiring tools.
  • The EEOC's Strategic Enforcement Plan (2024-2028): Identifies AI and algorithmic fairness in employment as a priority enforcement area, signaling that the agency is actively looking for cases to bring.

Disparate Impact: The Legal Concept Every Employer Must Understand

Disparate impact is the legal doctrine at the heart of nearly every EEOC concern about AI in hiring. Understanding it is non-negotiable for any employer using AI tools.

Disparate impact occurs when a facially neutral policy or practice disproportionately affects members of a protected group, even if there was no intent to discriminate. Intent is irrelevant. If your AI resume screener rejects 60% of female applicants but only 30% of male applicants for the same role, you have a potential disparate impact problem — regardless of whether the AI was designed to consider gender.

The EEOC treats the four-fifths (or 80%) rule as a practical rule of thumb for identifying potential disparate impact, not a definitive legal test. Here is how it works:

  1. Calculate the selection rate for each group. If 50 out of 100 male applicants pass your AI screen (50% selection rate) and 30 out of 100 female applicants pass (30% selection rate), those are your numbers.
  2. Divide the lower rate by the higher rate. In this example: 30% / 50% = 0.60, or 60%.
  3. If the result is less than 80% (four-fifths), the selection procedure has a potential disparate impact.

In the example above, the 60% ratio is below the 80% threshold, which means the EEOC would consider this evidence of potential discrimination. The employer would then need to demonstrate that the screening criteria are job-related and consistent with business necessity.
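
To make the arithmetic concrete, here is a minimal Python sketch of the same check. The function names are ours and the numbers mirror the example above; treat it as an illustration, not a substitute for a formal adverse impact analysis.

```python
def selection_rate(passed: int, applicants: int) -> float:
    """Fraction of applicants in a group who pass the screen."""
    return passed / applicants

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Lower selection rate divided by the higher one (the four-fifths ratio)."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# The example from the text: 50 of 100 male applicants pass the AI screen,
# and 30 of 100 female applicants pass.
male_rate = selection_rate(50, 100)    # 0.50
female_rate = selection_rate(30, 100)  # 0.30

ratio = impact_ratio(male_rate, female_rate)  # 0.60
if ratio < 0.80:
    print(f"Impact ratio of {ratio:.0%} is below the 80% threshold: investigate.")
```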

How the ADA Applies to AI Hiring Tools

The EEOC's ADA guidance on AI is particularly important because AI tools can inadvertently screen out candidates with disabilities in ways that are not immediately obvious.

The guidance identifies three key risk areas:

1. AI That Functions as a Medical Examination

Under the ADA, employers cannot require medical examinations before making a conditional job offer. Some AI assessments — particularly those that analyze speech patterns, facial expressions, or emotional responses — may be collecting information that is closely tied to physical or mental health conditions. If an AI assessment effectively reveals whether a candidate has a disability, it may cross the line into a prohibited pre-offer medical examination.

2. AI That Screens Out Candidates with Disabilities

An AI tool might penalize candidates who speak slowly (potentially screening out people with speech disabilities), who take longer to complete timed assessments (potentially screening out people with learning disabilities), or who do not make eye contact in video interviews (potentially screening out people on the autism spectrum). Even if the AI does not “know” about the disability, the outcome — systematically disadvantaging people with certain disabilities — creates liability.

3. Failure to Provide Reasonable Accommodations

Employers must provide reasonable accommodations for AI-administered assessments, just as they would for any other part of the hiring process. If your AI assessment is a timed typing test and a candidate with a hand injury requests additional time, you must provide it. If your AI interview is conducted by voice and a candidate who is deaf or hard of hearing requests a text-based alternative, you must provide it.

The EEOC has made clear that an inability to modify the vendor's software is not an acceptable excuse for failing to accommodate. If your AI vendor's tool cannot support reasonable accommodations, you need a different tool or a manual alternative for candidates who request accommodations.

What “Job-Related and Consistent with Business Necessity” Means in Practice

When an AI tool produces a disparate impact, the employer's primary legal defense is demonstrating that the tool's criteria are job-related and consistent with business necessity. This sounds straightforward, but in practice it requires rigorous documentation.

To meet this standard, you need to show:

  1. The criteria measured by the AI are actually required for the job. If your AI screens for a bachelor's degree and the role can be performed successfully without one, the requirement is not job-related. If your AI penalizes gaps in employment history and employment gaps have no correlation with job performance, the criterion is not job-related.
  2. The AI measures those criteria accurately. The tool must be validated — meaning there is evidence that what it measures actually predicts performance in the specific role. A general personality test that has been validated for customer service roles does not automatically meet this standard when applied to engineering roles.
  3. There is no less discriminatory alternative available. Even if your criteria are job-related and your tool measures them accurately, if there is a different approach that would achieve the same business purpose with less disparate impact, you are expected to use it.

Real Enforcement Examples

The EEOC has moved beyond guidance and into active enforcement. Here are examples that illustrate how these principles play out in practice:

EEOC v. iTutorGroup (2023)

The EEOC sued iTutorGroup for age discrimination after the company's automated hiring software rejected female applicants aged 55 or older and male applicants aged 60 or older. The case resulted in a $365,000 settlement. The key takeaway: an AI tool that uses age as a screening criterion — even indirectly through proxy variables — violates the Age Discrimination in Employment Act (ADEA) just as clearly as a human recruiter throwing out resumes from older candidates.

EEOC Conciliation of Automated Resume Screening

In multiple conciliation agreements (which are settled before litigation and therefore receive less public attention), the EEOC has addressed cases where automated resume screening produced significant disparate impact by race and gender. In these cases, employers were required to conduct bias audits, revise their screening criteria, and implement ongoing monitoring.

EEOC Commissioner Statements on AI Enforcement

EEOC commissioners have publicly stated that the agency views AI discrimination cases as a strategic priority. Commissioner Keith Sonderling has repeatedly emphasized that employers cannot “hide behind the algorithm” and that the agency will hold employers accountable for the outcomes of their AI tools, not just their intentions.

The Uniform Guidelines on Employee Selection Procedures

One document that does not get enough attention in AI hiring discussions is the Uniform Guidelines on Employee Selection Procedures, jointly adopted in 1978 by the EEOC, the Department of Labor, the Department of Justice, and the Civil Service Commission (the predecessor of today's Office of Personnel Management). These guidelines establish the framework for validating employment selection procedures — and AI tools are employment selection procedures.

Under the Uniform Guidelines, any selection procedure that has an adverse impact on a protected group must be validated through one of three methods:

  1. Criterion-related validity: Statistical evidence that the tool predicts successful job performance (e.g., scores on the AI assessment correlate with supervisor performance ratings). A simplified sketch of this kind of evidence appears after this list.
  2. Content validity: Evidence that the tool measures a representative sample of the knowledge, skills, or abilities required for the job (e.g., a coding test for a software developer role).
  3. Construct validity: Evidence that the tool measures a psychological construct (e.g., conscientiousness) that has been shown to be important for job performance.
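
To show what the first method's evidence looks like in its simplest form, here is a Python sketch that correlates hypothetical AI screen scores with later supervisor ratings for the same hires. The data is invented for illustration; a real validation study requires much larger samples, professionally defined performance criteria, and documentation that meets the Uniform Guidelines' technical standards.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical data: AI screen scores for ten hires, paired with their
# supervisor performance ratings a year later. Invented for illustration.
screen_scores = [62, 71, 55, 88, 90, 47, 76, 83, 59, 68]
performance_ratings = [3.1, 3.8, 2.9, 4.5, 4.2, 2.4, 3.6, 4.0, 3.0, 3.3]

# A meaningful positive correlation is the core of criterion-related
# validity evidence: higher screen scores should predict better performance.
r = correlation(screen_scores, performance_ratings)
print(f"Score-performance correlation: r = {r:.2f}")
```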

Many AI hiring vendors have not conducted this level of validation. As an employer, you should be asking your vendors directly: “Has this tool been validated under the Uniform Guidelines?” If the answer is no, you are carrying unmitigated legal risk.

How to Comply: A Practical Framework

Based on the EEOC's guidance, here is what employers should do right now:

Step 1: Inventory Your AI Tools

List every AI tool used in your hiring process. Include tools you may not think of as “AI” — algorithmic resume ranking in your ATS, automated screening questions, chatbot-based candidate intake, and any third-party assessment platform. For each tool, document what data it uses, what decisions it influences, and what human oversight exists.
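
As a starting point, the inventory can be a structured record per tool. Here is one possible shape in Python; every field name and example value below is ours, so adapt it to whatever your compliance or legal team actually tracks.

```python
from dataclasses import dataclass, field

@dataclass
class HiringToolRecord:
    """One inventory entry per AI tool in the hiring pipeline."""
    name: str
    vendor: str
    stage: str                       # e.g., "resume screen", "assessment"
    data_used: list[str] = field(default_factory=list)
    decisions_influenced: str = ""   # what the tool's output changes
    human_oversight: str = ""        # who reviews the output, and how

# Hypothetical entry; the tool and vendor names are invented.
inventory = [
    HiringToolRecord(
        name="ATS resume ranker",
        vendor="ExampleVendor",
        stage="resume screen",
        data_used=["resume text", "employment history"],
        decisions_influenced="the order in which recruiters see resumes",
        human_oversight="recruiter reviews the full ranked list weekly",
    ),
]
```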

Step 2: Analyze Outcomes by Protected Category

For each AI tool that influences hiring decisions, calculate selection rates by race, sex, age, and disability status (to the extent this data is available). Apply the four-fifths rule. If any group's selection rate is less than 80% of the highest group's rate, investigate further.
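
Extending the earlier two-group example, here is a short Python sketch that applies the four-fifths rule across any number of groups at once. The helper function and the counts are ours, purely to illustrate the check.

```python
def flag_adverse_impact(counts: dict[str, tuple[int, int]]) -> list[str]:
    """Return groups whose selection rate falls below 80% of the highest rate.

    counts maps a group label to (passed, total applicants).
    """
    rates = {group: passed / total for group, (passed, total) in counts.items()}
    highest = max(rates.values())
    return [group for group, rate in rates.items() if rate / highest < 0.80]

# Hypothetical screening outcomes for one requisition.
flagged = flag_adverse_impact({
    "Group A": (120, 200),  # 60% selection rate
    "Group B": (80, 200),   # 40%: 40/60 is about 0.67, below the threshold
    "Group C": (110, 200),  # 55%: 55/60 is about 0.92, passes
})
print("Investigate further:", flagged)
```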

Step 3: Validate Job-Relatedness

For any tool that shows potential disparate impact, document the business necessity for each criterion the tool evaluates. Can you demonstrate that each factor the AI considers is required for successful job performance? If not, revise the criteria.

Step 4: Ensure Accommodation Processes

Establish a clear process for candidates to request accommodations for AI-administered assessments. This should be communicated proactively to all candidates before they encounter the AI tool, not buried in fine print.

Step 5: Maintain a Human Decision-Maker

Ensure that no AI tool functions as the sole decision-maker at any stage of your hiring process. A human must review AI recommendations, have the authority to override them, and make the final hiring decision. Tools like PersonaScore are built around this principle: AI organizes candidate data and generates insights, but the hiring decision belongs to the human team.

Step 6: Document and Monitor Continuously

Compliance is not a one-time exercise. Establish a regular cadence — at minimum annually — for reviewing AI tool outcomes, updating validation documentation, and assessing whether your tools still meet the job-relatedness standard as roles evolve. For a detailed audit methodology, see our guide on how to audit your AI hiring tools for compliance.

Common Misconceptions About EEOC and AI

Several myths persist in employer discussions about EEOC and AI hiring tools. Here are the most dangerous ones:

  • “Our vendor says the tool is EEOC compliant, so we are covered.” There is no such thing as EEOC certification or approval of AI tools. The EEOC does not pre-approve hiring tools. Vendor compliance claims should be verified, not taken at face value. And regardless of what the vendor says, the employer is the legally responsible party.
  • “We do not collect demographic data, so we cannot have a disparate impact problem.” Disparate impact exists in outcomes, not in data collection. Whether or not you track demographics, the EEOC can analyze your hiring data and identify disparate impact. Not collecting the data makes it harder for you to detect problems early — it does not protect you from liability.
  • “The AI is objective, so it cannot discriminate.” AI learns from historical data. If that data reflects human biases — and historical hiring data invariably does — the AI will replicate those biases at scale. Objectivity in processing does not equal fairness in outcomes.
  • “We only use AI for the first screen; humans make the real decisions.” If the AI screen eliminates candidates before a human ever sees them, the AI is making a consequential decision. The EEOC applies the same legal standard to every stage of the hiring process, including the initial screen.

What Is Coming from the EEOC

The EEOC has signaled that additional AI-specific guidance and enforcement are forthcoming. Based on commissioner statements, strategic planning documents, and the agency's track record, employers should expect:

  • More enforcement actions targeting AI-driven disparate impact, particularly in high-volume hiring contexts (retail, hospitality, logistics)
  • Expanded guidance on what constitutes adequate human oversight of AI hiring tools
  • Greater scrutiny of AI vendor claims, potentially including requirements for independent validation
  • Coordination with state and local agencies enforcing AI-specific hiring laws

The Bottom Line

The EEOC's position on AI in hiring is not radical or unprecedented. It is the straightforward application of long-established anti-discrimination principles to a new technology. The legal standard has not changed: employers must ensure that their hiring practices — including AI-assisted ones — do not discriminate against protected groups, are job-related and consistent with business necessity, and include reasonable accommodations for candidates with disabilities.

What has changed is the scale at which AI can produce discriminatory outcomes and the speed at which those outcomes accumulate. A biased human recruiter might reject a few dozen candidates unfairly over the course of a year. A biased AI tool can reject thousands in a week. That scale is exactly why the EEOC is paying attention.

Next in this series: AI for Organization vs AI for Decisions: Where to Draw the Line in Hiring. For a broader overview of the legal landscape, see the first post: AI in Hiring: What's Legal, What's Not, and What's Gray.

Ready to put this into practice?

PersonaScore turns personality data into structured hiring decisions. Start your free trial today.

Start Free Trial