State-by-State AI Hiring Laws: NYC, Illinois, Colorado, and Beyond
While federal agencies like the EEOC provide guidance on AI in hiring, the most specific and enforceable AI hiring laws are coming from states and cities. NYC Local Law 144, the Illinois Artificial Intelligence Video Interview Act, and the Colorado AI Act each impose distinct requirements on employers using AI in recruitment — and the list of jurisdictions with their own rules is growing rapidly. If you hire candidates in multiple states, understanding this patchwork of state AI hiring regulations is not optional. It is a compliance imperative.
This is the fourth post in our AI & Hiring Law series. For the federal perspective, see our posts on EEOC guidelines and AI organization vs decision-making.
NYC Local Law 144: The First Major AI Hiring Law
New York City's Local Law 144, which took effect on July 5, 2023, is the first law in the United States to specifically regulate automated employment decision tools (AEDTs). It applies to any employer or employment agency that uses an AEDT to screen candidates or employees for hiring or promotion within New York City.
What Qualifies as an AEDT
The law defines an AEDT as any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output — including a score, classification, or recommendation — that is used to substantially assist or replace discretionary decision-making for employment decisions. This is a broad definition that captures most AI hiring tools, including:
- AI-powered resume screening tools that score or rank candidates
- Chatbot-based candidate assessments that generate a qualification score
- Video interview platforms that analyze candidate responses and produce ratings
- Algorithmic tools within your ATS that rank applicants for recruiter review
Tools that simply organize candidate information without generating a score or recommendation may not meet the AEDT definition, but the line is a fine one. If you are uncertain whether your tool qualifies, assume it does and comply accordingly.
Requirements Under Local Law 144
- Independent bias audit. Before using an AEDT, the employer must have the tool independently audited for bias. The audit must be conducted by an independent auditor (not the vendor) and must calculate selection rates and impact ratios for race/ethnicity and sex categories. The audit must be completed no more than one year before the use of the AEDT.
- Public disclosure of audit results. The employer must publish a summary of the most recent bias audit results on their website. This includes the date of the audit, the source and explanation of the data used, the number of individuals assessed, and the selection rates and impact ratios by category.
- Candidate notice. Employers must notify candidates at least 10 business days before using an AEDT. The notice must state that an AEDT will be used, describe the job qualifications and characteristics the AEDT will assess, and explain how candidates can request an alternative selection process or a reasonable accommodation.
- Data disclosure. Employers must inform candidates of the type of data collected by the AEDT, the data's source, and the employer's data retention policy.
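The selection-rate and impact-ratio math an auditor must report can be sketched in a few lines. The categories and counts below are invented for illustration only; a real Local Law 144 audit must be performed by an independent auditor on actual historical or test data.

```python
# Hypothetical illustration of the selection-rate and impact-ratio
# calculations a Local Law 144 bias audit must report. Category labels
# and counts are invented, not real audit data.

def impact_ratios(counts):
    """counts: {category: (applicants_assessed, applicants_selected)}"""
    rates = {cat: sel / total for cat, (total, sel) in counts.items()}
    best = max(rates.values())  # highest selection rate is the benchmark
    return {cat: (rate, rate / best) for cat, rate in rates.items()}

demo = {
    "Category A": (400, 120),  # 30% selected
    "Category B": (350, 70),   # 20% selected
    "Category C": (250, 50),   # 20% selected
}

for cat, (rate, ratio) in impact_ratios(demo).items():
    # The EEOC's four-fifths rule treats ratios below 0.8 as a red flag;
    # LL144 itself only requires the ratios to be calculated and published.
    flag = "  <- below four-fifths benchmark" if ratio < 0.8 else ""
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

Note that Local Law 144 does not set a pass/fail threshold; the 0.8 benchmark shown is the EEOC's traditional rule of thumb for spotting potential disparate impact.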
Penalties and Enforcement
Violations carry civil penalties of $500 for the first violation and $500 to $1,500 for each subsequent violation. Each day of non-compliant AEDT use and each candidate who is not properly notified constitutes a separate violation. For a company processing hundreds of applications, penalties can accumulate rapidly.
NYC Compliance Checklist
- Identify all tools in your hiring process that meet the AEDT definition
- Engage an independent auditor to conduct a bias audit annually
- Publish audit results on your careers page or company website
- Update your application process to notify candidates at least 10 business days before AEDT use
- Create a data collection disclosure that describes what data the AEDT uses and your retention policy
- Establish a process for candidates to request an alternative selection process
Illinois AI Hiring Laws
Illinois has two laws affecting AI in hiring, making it one of the most regulated states for AI recruitment tools.
The Artificial Intelligence Video Interview Act (AIVIA, 2020)
Illinois was the first state to pass a law specifically addressing AI in hiring. The AIVIA applies to employers that use AI to analyze video interviews of candidates for positions based in Illinois.
Requirements include:
- Disclosure. Employers must notify candidates that AI will be used to analyze their video interview and explain how the AI works and what characteristics it evaluates.
- Consent. Candidates must provide written consent before the AI analysis. Without consent, the employer cannot use AI to evaluate the video interview.
- Deletion upon request. If a candidate requests that their video be deleted, the employer must do so within 30 days, and must also instruct any third parties with access to the video to delete it.
- Limited sharing. Video interviews submitted by candidates may only be shared with people whose expertise is necessary to evaluate the candidate.
Illinois Human Rights Act Amendment (2026)
Illinois amended its Human Rights Act to specifically address AI in employment decisions more broadly. The amendment makes it a civil rights violation to use AI that has a discriminatory effect on protected classes in hiring, recruitment, promotion, discharge, or the terms and conditions of employment. This extends beyond video interviews to cover any AI tool used in employment decisions, including resume screening, chatbot assessments, and automated scoring systems.
The practical effect: Illinois employers using any AI hiring tool must be able to demonstrate that the tool does not produce a discriminatory impact, or that any disparate impact is justified by business necessity — echoing the federal standard but with state-level enforcement that can be more aggressive.
Illinois Compliance Checklist
- If using AI video interview analysis: provide written disclosure and obtain written consent from every candidate
- Establish a video deletion process to comply within 30 days of candidate request
- For all AI hiring tools: conduct bias testing to check for discriminatory impact
- Document the business necessity justification for any AI tool criteria that produce disparate impact
- Restrict video sharing to only those with evaluation expertise
Colorado AI Act
The Colorado AI Act, signed in 2024, takes effect on February 1, 2026, and represents the most comprehensive state-level AI regulation in the United States. It applies to “high-risk AI systems,” which explicitly includes AI used in employment and hiring decisions.
Key Provisions
- Duty to avoid algorithmic discrimination. Deployers (employers who use AI tools) have an affirmative duty to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. This is a higher standard than simply “do not discriminate” — it requires proactive risk management.
- Risk management program. Employers using high-risk AI systems must implement a risk management policy and program that identifies known or reasonably foreseeable risks of algorithmic discrimination and documents the steps taken to mitigate those risks.
- Impact assessments. Before deploying a high-risk AI system and annually thereafter, employers must complete an impact assessment that documents the purpose of the AI system, its intended benefits and known risks, the data used, the outputs generated, how the system has been evaluated for discrimination, and what mitigation steps have been taken.
- Transparency requirements. Employers must provide notice to candidates that an AI system is being used, a description of the system's purpose, the types of data processed, the nature of the output, and information about how to contest the AI's decision.
- Right to opt out. In certain circumstances, candidates may have the right to opt out of AI processing or request human review of AI-informed decisions.
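One way to keep the impact-assessment elements above organized is a structured record per AI tool. The field names in this sketch are our own shorthand for the statutory elements, not terms drawn from the Act itself.

```python
# Hypothetical record structure covering the impact-assessment elements
# the Colorado AI Act requires deployers to document. Field names are
# our own shorthand, not statutory language.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    tool_name: str
    purpose: str
    intended_benefits: list[str]
    known_risks: list[str]          # known or reasonably foreseeable discrimination risks
    data_categories: list[str]      # data processed as inputs
    outputs: str                    # nature of the output (score, ranking, etc.)
    discrimination_evaluation: str  # how the system was evaluated for bias
    mitigations: list[str]          # steps taken to mitigate identified risks
    completed_on: date = field(default_factory=date.today)

    def needs_refresh(self, today: date) -> bool:
        """Assessments must be updated at least annually."""
        return (today - self.completed_on).days >= 365
```

A record like this also doubles as the retention artifact: serialize it and keep it for the tool's lifetime plus the post-use retention period.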
Colorado Compliance Checklist
- Classify all AI hiring tools as high-risk AI systems under the Act
- Develop a written risk management policy addressing algorithmic discrimination
- Complete an impact assessment for each AI hiring tool before deployment
- Update impact assessments annually or when significant changes are made to the AI tool
- Implement candidate-facing transparency notices describing AI use, data processing, and appeal rights
- Establish a process for candidates to request human review of AI-influenced decisions
- Retain impact assessments and related documentation for the life of the AI system plus three years
Maryland: Facial Recognition Restrictions
Maryland's law, effective since 2020, is narrower than the others but important for employers using video interview technology. The law prohibits employers from using facial recognition technology during a job interview unless the candidate provides a signed consent form. “Facial recognition” includes any automated process that identifies or verifies a person based on their face.
This applies specifically to the interview stage and to facial recognition technology, not to AI more broadly. But employers using video interview platforms should verify whether their platform uses any form of facial analysis, even if it is described as “sentiment analysis” or “engagement scoring” rather than “facial recognition.”
Other States with AI Hiring Legislation
Beyond the major laws above, several other states have enacted or are actively considering AI hiring regulations:
California
California has not yet passed a comprehensive AI hiring law, but the California Privacy Rights Act (CPRA) establishes consumer rights around automated decision-making, implemented through agency rulemaking, which can reach AI-driven hiring decisions. The California Civil Rights Department has also issued guidance on how the Fair Employment and Housing Act applies to AI hiring tools. Multiple AI-specific bills have been introduced in the legislature, and comprehensive legislation is expected.
New Jersey
New Jersey introduced legislation requiring employers to notify candidates of AI use in hiring, obtain consent before AI evaluation, and provide explanations for AI-influenced rejections. The bill would also require bias testing and reporting.
Vermont
Vermont's proposed AI legislation includes provisions for transparency in automated decision-making, including employment decisions, with requirements for impact assessments and public reporting.
Connecticut
Connecticut has advanced comprehensive AI transparency legislation that would require employers to inform candidates when AI is used in hiring and to provide information about what data the AI collects and how it is used.
Washington State
Washington has introduced multiple bills addressing AI in employment, including requirements for bias auditing, candidate notification, and impact assessments similar to Colorado's approach.
How to Comply When You Hire Across Multiple States
The patchwork of state laws creates a practical challenge for employers who hire candidates in multiple jurisdictions. Here is a framework for managing multi-state compliance:
Option 1: Comply with the Strictest Standard Everywhere
The simplest approach is to identify the most demanding set of requirements across all jurisdictions where you hire and apply those requirements universally. If you comply with Colorado's impact assessment requirements, NYC's bias audit mandate, and Illinois's consent provisions for all candidates everywhere, you will meet or exceed the requirements in every jurisdiction.
This approach has the advantage of operational simplicity — one set of procedures, applied uniformly. The disadvantage is that it imposes the highest compliance burden on every hire, even in states with no AI-specific requirements.
Option 2: Jurisdiction-Specific Compliance
The alternative is to maintain jurisdiction-specific procedures, applying the relevant requirements based on where the candidate is located or where the position is based. This reduces the compliance burden for hires in less-regulated states but requires careful tracking of candidate locations and accurate mapping of which requirements apply.
For most employers, Option 1 is the better choice. The cost of over-complying is low (a few extra disclosures and audit steps), while the cost of under-complying in a specific jurisdiction can be significant. And as more states pass AI hiring laws, the difference between the two approaches narrows.
The Universal Compliance Framework
Regardless of which option you choose, these practices will keep you compliant in every current and foreseeable jurisdiction:
- Notify all candidates that AI is used in your hiring process. Do this before the AI evaluates them. Include what the AI does, what data it uses, and how they can request an alternative process or accommodation.
- Conduct an annual independent bias audit of every AI hiring tool. Publish the results. This satisfies NYC's mandate and positions you well for every other jurisdiction's requirements.
- Complete an impact assessment for each AI tool. Document the tool's purpose, data inputs, outputs, known risks, and mitigation steps. Update annually.
- Maintain a written risk management policy for AI in hiring. This should describe your principles for AI use, your commitment to human decision-making, your bias monitoring procedures, and your candidate rights provisions.
- Ensure human decision-making authority at every consequential stage. No automatic rejections, no automatic offers. Every candidate who does not advance should have been reviewed by a person. Tools like PersonaScore are designed around this principle, using AI to organize and inform while keeping humans in control of decisions.
- Retain documentation. Keep bias audit results, impact assessments, candidate notices, and AI tool contracts for at least three years after the AI tool is last used.
What Is Coming Next
The trend is unmistakable: more states will pass AI hiring laws, and those laws will get more specific and more demanding. Based on legislative activity as of mid-2025:
- At least 15 states have introduced or are actively drafting AI hiring legislation
- Federal legislation specifically addressing AI in employment decisions remains a possibility, though the timeline is uncertain
- The EU AI Act, which classifies AI hiring tools as high-risk and imposes strict requirements, will influence U.S. state legislation as lawmakers look to European models
- Enforcement of existing laws is accelerating, with NYC issuing fines and the EEOC pursuing AI discrimination cases
Employers who build compliant practices now will be positioned to absorb new requirements with minimal disruption. Those who wait for their specific state to act will face the scramble of emergency compliance under tight deadlines.
The Bottom Line
State and local AI hiring laws are the front line of AI regulation in the United States. They are more specific, more prescriptive, and more actively enforced than federal guidance. If you use AI anywhere in your hiring process, you need to know the laws in every jurisdiction where you recruit candidates — and you need a compliance framework that can adapt as new laws take effect.
The good news is that the core requirements are consistent across jurisdictions: be transparent with candidates, audit your tools for bias, keep humans in the decision-making loop, and document everything. If you do those four things, you are well-positioned for whatever comes next.
The final post in the AI & Hiring Law series covers the practical mechanics: How to Audit Your AI Hiring Tools for Compliance. For the broader legal context, start with AI in Hiring: What's Legal, What's Not, and What's Gray and our guide to what actually works in AI hiring.