AI Compliance · 12 min read

AI in Hiring: What's Legal, What's Not, and What's Gray

PersonaScore Team

AI hiring laws are evolving faster than most employers can track. If you use any form of artificial intelligence in your hiring process — resume screening software, chatbot-based candidate intake, automated assessments, or AI-generated interview questions — you are operating in a legal environment that is shifting underneath you. Some things are clearly legal. Some are clearly not. And a surprising amount falls into a gray area where the law has not caught up with the technology.

This guide breaks down the current legal landscape for AI in hiring as of mid-2025, covering federal law, emerging state regulations, and the practical compliance steps every employer should take right now — regardless of company size. This is the first post in our AI & Hiring Law series, where we cover everything from EEOC guidance to state-by-state regulations to audit frameworks.

What AI Hiring Laws Exist Right Now

There is no single federal law in the United States that specifically governs the use of AI in hiring. Instead, employers must navigate a patchwork of existing anti-discrimination laws, new state-level AI regulations, and federal agency guidance that applies AI-specific interpretations to long-standing civil rights frameworks.

The key federal laws that already apply to AI hiring tools include:

  • Title VII of the Civil Rights Act of 1964: Prohibits employment discrimination based on race, color, religion, sex, or national origin. If your AI tool produces a disparate impact on any protected group, you face the same liability as if a human made the discriminatory decision.
  • The Americans with Disabilities Act (ADA): Requires reasonable accommodations and prohibits screening out candidates based on disability. AI tools that assess communication style, facial expressions, or response patterns can inadvertently discriminate against candidates with disabilities.
  • The Age Discrimination in Employment Act (ADEA): Protects workers 40 and older. AI trained on historical hiring data may encode age bias if younger candidates were historically preferred.
  • The Equal Pay Act: If your AI recommends salary ranges or compensation and those recommendations reflect gender-based pay disparities in historical data, you have a legal problem.

The critical legal principle here is straightforward: you are responsible for the outcomes of your AI tools, even if you did not build them and do not understand how they work. Using a third-party vendor does not transfer your legal liability.

What Employers Can Legally Do with AI in Hiring

Despite the complexity, there are many legitimate, legally sound ways to use AI in your hiring process. The key is understanding which applications carry low risk and which require careful implementation.

Clearly Legal Applications

  1. Administrative automation. Using AI to schedule interviews, send status updates, manage candidate pipelines, and handle logistics. These applications do not involve evaluative judgments about candidates and carry minimal legal risk.
  2. Job description optimization. AI tools that analyze your job postings for biased language, readability, and inclusivity are well within legal bounds. They help you attract a more diverse applicant pool, which aligns with rather than conflicts with anti-discrimination law.
  3. Structured question generation. Using AI to generate consistent, role-relevant interview questions based on job requirements. As long as every candidate receives the same questions, this actually strengthens your legal position by creating a more structured, defensible process.
  4. Data organization and presentation. AI that organizes candidate information, assessment results, and interview scores into a structured format for human decision-makers. The AI is organizing data, not making decisions.
  5. Candidate sourcing. Using AI to identify potential candidates from public profiles and databases, as long as the sourcing criteria do not function as proxies for protected characteristics.

Legal with Proper Implementation

  1. Resume screening and ranking. AI-powered resume screening is legal, but you must be able to demonstrate that the screening criteria are job-related and that the tool does not produce a disparate impact on protected groups. Regular bias audits are essential.
  2. Skills assessments. AI-administered skills tests are legal if the skills tested are genuinely required for the role and the assessment format does not disadvantage candidates based on protected characteristics.
  3. Personality and behavioral assessments. Legal when validated for the specific use case and when results inform human decisions rather than serving as automatic pass/fail gates. Platforms like PersonaScore are designed with this principle at their core — assessment data informs the interviewer, but a human always makes the hiring decision.

What Is Clearly Not Legal

Some applications of AI in hiring are unambiguously illegal under existing law, regardless of whether new AI-specific legislation has been passed in your jurisdiction.

  1. Using AI to intentionally discriminate against protected classes. This should be obvious, but it extends to instructing AI to filter candidates by age, race, gender, or other protected characteristics, even through proxy variables.
  2. Deploying AI tools that produce proven disparate impact without remediation. If you know or should know that your AI screening tool disproportionately eliminates candidates from a protected group, and the criteria cannot be justified as job-related and consistent with business necessity, continuing to use it is illegal.
  3. Failing to provide ADA accommodations for AI assessments. If your AI assessment requires a timed video response and a candidate requests an accommodation due to a speech disability, you must provide one. “The AI system doesn't allow modifications” is not a legal defense.
  4. Using emotion or facial recognition in hiring without disclosure (in regulated jurisdictions). Illinois, Maryland, and other states have specific laws restricting or requiring consent for facial analysis in hiring contexts.
  5. Failing to disclose AI use where legally required. Several jurisdictions now mandate that employers inform candidates when AI is being used to evaluate them. Operating in those jurisdictions without disclosure is a straightforward violation.

The Gray Areas: Where the Law Has Not Caught Up

The most challenging territory for employers is the gray area — AI applications that are not explicitly prohibited but carry uncertain legal risk. These are the areas where reasonable legal minds disagree and where future legislation or court decisions could shift the landscape dramatically.

Gray Area 1: AI-Generated Candidate Rankings

Using AI to rank candidates from “most qualified” to “least qualified” is common, but the legal status depends on what happens with that ranking. If a human reviews the ranking, considers candidates the AI ranked lower, and makes an independent judgment, you are on relatively solid ground. If the ranking effectively functions as a decision — hiring managers only interview the top five AI-ranked candidates and never see the rest — you are closer to automated decision-making, which carries significantly more risk.

The distinction between “AI organizing information for human review” and “AI making the decision in practice” is one of the most important legal boundaries in AI hiring. Our dedicated post on this topic explores it in depth.

Gray Area 2: Natural Language Processing for Cultural Fit

Some AI tools analyze candidate responses in written applications or chatbot interactions to assess “cultural fit” or “communication style.” The legal risk here is that language patterns correlate strongly with race, ethnicity, national origin, socioeconomic background, and neurodivergence. An AI that penalizes non-standard English usage or particular communication styles may produce disparate impact that is difficult to justify as job-related.

Gray Area 3: Predictive Performance Models

AI tools that claim to predict which candidates will be top performers based on pattern matching against existing employee data operate in legally uncertain territory. The validity of these predictions is often questionable, and the training data may encode exactly the biases that anti-discrimination law is designed to prevent. If your top performers historically share certain demographic characteristics, the AI learns to favor those characteristics — which is a textbook case of disparate impact.

Gray Area 4: Passive Candidate Monitoring

Some AI tools monitor social media, professional networks, and online behavior to build candidate profiles or predict job-seeking intent. While there is no broad federal prohibition on viewing publicly available information, the aggregation of this data by AI raises concerns about privacy, consent, and the potential for discriminatory inferences drawn from online behavior that correlates with protected characteristics.

How to Stay on the Right Side of AI Hiring Law

Given the evolving legal landscape, here is a practical compliance framework that protects your organization regardless of which specific laws apply in your jurisdiction:

1. Maintain Human Decision-Making Authority

The single most important step you can take is ensuring that a human being makes every consequential hiring decision. AI should inform, organize, and recommend — but never decide. This is not just good legal practice; it is the ethical foundation of responsible AI use in hiring. Every candidate who does not get the job should be able to know that a person, not an algorithm, made that call.
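
To make this concrete, here is a minimal sketch in Python of one way to enforce human decision-making authority in a hiring workflow. The types and names (AIRecommendation, record_decision) are hypothetical illustrations, not any vendor's actual API; the point is that a decision cannot be finalized unless a named human takes responsibility for it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical types for illustration only; not any vendor's actual API.

@dataclass
class AIRecommendation:
    candidate_id: str
    score: float    # AI-generated score; advisory only
    rationale: str  # what the model weighed, shown to the human reviewer

@dataclass
class HiringDecision:
    candidate_id: str
    outcome: str              # "advance", "reject", or "hire"
    decided_by: str           # a named human, never "system"
    reviewed_ai_output: bool  # reviewer confirms they saw the recommendation
    notes: str                # the human's own reasoning
    decided_at: datetime

def record_decision(rec: AIRecommendation, outcome: str,
                    decided_by: str, notes: str) -> HiringDecision:
    """Finalize a decision only when a named human takes responsibility."""
    if not decided_by or decided_by.lower() in {"system", "ai", "auto"}:
        raise ValueError("Every consequential decision needs a human decider.")
    if not notes.strip():
        raise ValueError("The reviewer must record their own reasoning.")
    return HiringDecision(
        candidate_id=rec.candidate_id,
        outcome=outcome,
        decided_by=decided_by,
        reviewed_ai_output=True,
        notes=notes,
        decided_at=datetime.now(timezone.utc),
    )
```

The structural choice worth copying is that the AI output and the human decision are separate records, so an audit can always show who decided and what they saw.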

2. Conduct Regular Bias Audits

At minimum annually, and ideally quarterly, analyze the outcomes of your AI-assisted hiring process by protected category. Are candidates from any group being screened out, ranked lower, or rejected at disproportionate rates? If so, investigate whether the criteria driving those outcomes are genuinely job-related. Our post on how to audit your AI hiring tools provides a complete framework for this process.
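
As a starting point, an audit can be as simple as comparing selection rates across groups. The sketch below, in Python with made-up numbers, applies the EEOC's long-standing four-fifths rule of thumb from the Uniform Guidelines: flag any group whose selection rate falls below 80% of the highest group's rate. Treat it as a screening heuristic rather than a legal bright line; a real audit should add statistical significance testing and legal review.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: a list of (group_label, was_selected) pairs."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    return {group: selected[group] / applied[group] for group in applied}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths rule of thumb)."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()
            if rate / top < threshold}

# Made-up example: 100 applicants per group; 30 vs. 18 advanced.
outcomes = ([("group_a", True)] * 30 + [("group_a", False)] * 70
            + [("group_b", True)] * 18 + [("group_b", False)] * 82)
rates = selection_rates(outcomes)   # group_a: 0.30, group_b: 0.18
print(adverse_impact_flags(rates))  # group_b ratio ~0.60, below 0.80: investigate
```

If a group is flagged, the next question is the legal one from the sections above: are the criteria driving that outcome job-related and consistent with business necessity?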

3. Document Everything

Maintain records of what AI tools you use, what they do, what data they access, what decisions they influence, and what human oversight exists at each stage. This documentation serves two purposes: it demonstrates compliance in the event of an audit or legal challenge, and it forces you to think clearly about what your AI is actually doing.
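
One lightweight way to keep this documentation current is a structured inventory with one record per tool. The Python schema below is a hypothetical sketch (the tool and vendor names are invented); the same fields work just as well in a spreadsheet or a YAML file.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One inventory entry per AI tool in the hiring pipeline."""
    name: str
    vendor: str
    purpose: str                     # what the tool does, in plain language
    data_accessed: list[str]         # e.g. resumes, assessment responses
    decisions_influenced: list[str]  # which pipeline stages it touches
    human_oversight: str             # who reviews its output, and when
    last_bias_audit: date | None = None
    disclosed_to_candidates: bool = False

inventory = [
    AIToolRecord(
        name="ResumeRanker",  # invented name for illustration
        vendor="Example Vendor Inc.",
        purpose="Ranks applicants against posted job requirements",
        data_accessed=["resumes", "application form answers"],
        decisions_influenced=["initial screen ordering"],
        human_oversight="Recruiter reviews the full list before any rejection",
        last_bias_audit=date(2025, 3, 1),
        disclosed_to_candidates=True,
    ),
]
```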

4. Know Your Jurisdiction

If you hire candidates in New York City, Illinois, Colorado, Maryland, or any other jurisdiction with specific AI hiring laws, you must comply with those requirements regardless of where your company is headquartered. This includes disclosure requirements, bias audit mandates, and candidate rights provisions. See our state-by-state AI hiring law guide for specific requirements.

5. Vet Your Vendors

If you use third-party AI hiring tools, ask your vendors direct questions:

  • Has the tool been independently audited for bias? What were the results?
  • What data does the tool use, and has the training data been reviewed for discriminatory patterns?
  • Does the tool comply with applicable state and local AI hiring laws?
  • Can the tool provide audit trails showing how individual candidates were evaluated?
  • What accommodations does the tool support for candidates with disabilities?

If a vendor cannot or will not answer these questions, that is a significant red flag.

6. Disclose AI Use to Candidates

Even in jurisdictions that do not yet require disclosure, telling candidates when AI is part of your evaluation process is a best practice that builds trust and positions you ahead of coming regulations. A simple statement in your application process is sufficient and honest: “We use AI-assisted tools to help organize candidate information and generate interview questions. All hiring decisions are made by our team members.”

The Compliance Checklist

Use this checklist to evaluate your current AI hiring practices:

  1. You can identify every AI tool used in your hiring process and describe what each one does.
  2. A human makes the final decision on every hire, with the authority to override AI recommendations.
  3. You have conducted a bias audit on your AI tools within the last 12 months.
  4. Candidates are informed when AI is used to evaluate them (required in some jurisdictions, best practice everywhere).
  5. You have a process for candidates to request accommodations for AI-administered assessments.
  6. Your AI tools' criteria are documented and can be shown to be job-related.
  7. You maintain records of AI-assisted hiring outcomes by demographic category.
  8. Your AI vendor contracts address liability, data handling, and compliance obligations.
  9. You have reviewed the AI hiring laws in every jurisdiction where you hire candidates.
  10. Your hiring managers understand what the AI tools do and what role they play in the process.

What Is Coming Next

Regulation of AI in hiring is accelerating. The European Union's AI Act classifies AI hiring tools as “high risk,” requiring conformity assessments and ongoing monitoring. In the United States, at least a dozen states have introduced or are drafting AI hiring legislation. The EEOC has issued guidance specifically addressing AI and algorithmic decision-making in employment, and enforcement actions increasingly target AI-driven disparate impact.

The direction is clear: more regulation, more transparency requirements, and more employer accountability. Organizations that build compliant practices now will be ahead of the curve. Those that wait for enforcement actions to force change will face the dual cost of remediation and legal exposure.

The Bottom Line

AI in hiring is legal. Discrimination in hiring is not. The technology you use does not change the legal standard — it only changes the mechanism by which violations can occur. Employers who use AI responsibly, with human oversight, regular auditing, proper disclosure, and genuine job-relatedness in their criteria, can leverage the technology's genuine benefits while staying on the right side of the law.

The employers who will face problems are the ones who deploy AI as a black box, trust vendor claims without verification, and remove human judgment from consequential decisions. Do not be one of them.

For more on how the EEOC specifically approaches AI in hiring, read the next post in this series: EEOC Guidelines for AI Hiring Tools: A Plain English Breakdown. And for a broader look at what AI can and cannot do well in hiring, see our guide on AI in Hiring: What Actually Works vs What's Just Hype.

Ready to put this into practice?

PersonaScore turns personality data into structured hiring decisions. Start your free trial today.
