Artificial intelligence (AI) is the best way to save time and make fair decisions, right? Not so fast. As AI becomes more common in our daily lives, we’ve seen it make mistakes and replicate human shortcomings. It surprised many when AI hiring algorithms turned out to replicate human biases as well. If you are an employer using AI-based hiring algorithms, you risk being held liable under federal law.
The problem
Some companies with a high volume of hires have used AI to help them make employment-related decisions. At first, this included only basic functions such as searching resumes for keywords. As the technology advanced, however, companies began using tools such as computer-analyzed video interviews and facial recognition technology to screen candidates.
Here are examples of using AI during the hiring process:
- Resume scanners that prioritize applications using certain keywords;
- Employee monitoring software that rates employees based on their keystrokes or other factors;
- “Virtual assistants” or “chatbots” that ask candidates about their qualifications and reject those who do not meet predefined requirements;
- Video interview software that rates candidates based on their facial expressions and speech patterns; and
- Testing software that provides applicants or employees with “job fit” scores regarding their personality, abilities, cognitive skills, or perceived “cultural fit” based on their performance on a game or a more traditional test.
While it’s helpful to have a computer handle all of these tasks, some companies discontinued the use of AI in hiring decisions when the technology turned out to screen candidates based on protected statuses. For example, AI recruiting software used by Amazon taught itself to prefer male candidates for technical roles. Researcher Joy Buolamwini also found that facial recognition software was less accurate at recognizing women and people of color, which could lead the software to misjudge the performance of candidates in a computer-analyzed video interview. Additionally, AI hiring tools could unintentionally screen out applicants with disabilities, even when they could do the job with a reasonable accommodation. Depending on how it is programmed, AI software absorbs collective attitudes and biases from everything it reads online, and without being taught to identify and mitigate those biases, it will likely perpetuate them. Employers using AI, however, can take steps to prevent that.
EEOC Guidance on Using AI in Employment-Related Decisions
The EEOC recently released guidance on how employers’ use of AI can comply with the Americans with Disabilities Act (ADA) and Title VII. Employers using AI to make employment decisions should consult this guidance.
On May 18, 2023, the EEOC released guidance to help employers “determine whether their tests and selection procedures are lawful for purposes of Title VII disparate impact analysis.” Disparate impact discrimination occurs when a facially neutral policy or practice has the effect of disproportionately excluding people on the basis of a protected status, unless the procedure is job-related and consistent with business necessity. An employer that administers a discriminatory selection procedure may be liable under Title VII even if the test was developed by an outside vendor; the guidance specifies that employers are responsible for selection procedures developed by third-party software vendors.
If you retain a software vendor to develop or administer an algorithmic decision-making tool, ask, at a minimum, whether the vendor has taken steps to assess whether the tool causes a disparate impact based on a characteristic protected by Title VII. If the tool results in a lower selection rate for people in a particular protected class, you must consider whether it is job-related and consistent with business necessity, and whether less discriminatory alternatives exist.
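The EEOC’s guidance discusses the longstanding “four-fifths rule” as a general rule of thumb for comparing selection rates. As a rough illustration of that arithmetic only, here is a minimal Python sketch; the group labels and applicant counts are hypothetical:

```python
# Illustrative sketch of the "four-fifths rule" of thumb discussed in the
# EEOC's Title VII guidance. All numbers below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool advanced."""
    return selected / applicants

# Hypothetical outcomes from an algorithmic screening tool.
group_a = selection_rate(selected=48, applicants=80)   # 60%
group_b = selection_rate(selected=12, applicants=40)   # 30%

# Compare the lower selection rate to the higher one.
impact_ratio = min(group_a, group_b) / max(group_a, group_b)

print(f"Group A rate: {group_a:.0%}, Group B rate: {group_b:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")

if impact_ratio < 0.8:
    # A ratio below four-fifths (80%) is a common red flag that
    # warrants review; it is a rule of thumb, not a safe harbor.
    print("Selection rate ratio falls below 4/5 - review the tool.")
```

A ratio below 0.8 does not automatically establish discrimination, nor does a ratio above 0.8 guarantee compliance; the EEOC treats the four-fifths rule as a starting point, not a legal safe harbor.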
On May 12, 2022, the EEOC released AI guidance on “how existing ADA requirements may apply to the use” of AI in employment decision making. It further “offers promising practices for employers to help them become ADA compliant when using AI-based decision-making tools.”
Not surprisingly (and consistent with its May 18, 2023 guidance), the EEOC concluded that an employer that administers a pre-employment test may be liable under the ADA if the test discriminates against persons with disabilities, even if the test was developed by an outside vendor.
Regardless of who developed an algorithmic decision-making tool, the EEOC advises employers to take additional steps during implementation and deployment to reduce the chances that the tool will discriminate against a person because of a disability, whether intentionally or not. Suggested steps include:
- Clearly state that reasonable accommodations, including alternative formats and alternative testing, are available for people with disabilities;
- Provide clear instructions for requesting reasonable accommodations; and
- Prior to the assessment, provide all job candidates and employees being assessed with as much information about the tool as possible, including the traits or characteristics the tool is designed to measure, the methods by which those traits or characteristics will be measured, and the disabilities, if any, that could potentially lower the assessment results or lead to screening out.
State and municipal laws
Additionally, states and municipalities have begun to combat the use of discriminatory AI hiring tools.
In 2020, Illinois enacted the Artificial Intelligence Video Interview Act. The law requires employers that use AI analysis of video interviews to take the following steps:
- Notify each candidate before the interview that AI technology may be used.
- Explain to the candidate how the AI technology works and what characteristics it uses to evaluate candidates.
- Obtain the applicant’s consent before the interview.
The video must be destroyed within 30 days of the candidate’s request, and employers must limit distribution of the videos to only those whose expertise is necessary to assess the candidate.
If an employer relies solely on AI to determine whether a candidate advances to an in-person interview, the employer must track the race and ethnicity of candidates who are not offered in-person interviews, as well as of the candidates ultimately hired.
Illinois law does not include explicit civil penalties.
In 2020, Maryland passed its AI employment law, called HB 1202. HB 1202 prohibits employers from using facial recognition technology during an interview to create a facial model without consent. Consent requires a signed waiver stating:
- The applicant’s name;
- The date of the interview;
- That the applicant consents to the use of facial recognition; and
- Whether the applicant has read the consent waiver.
Like Illinois law, Maryland law does not include a specific penalty or fine for a violation of the law.
More recently, New York City enacted Local Law Int. No. 1894-A, which requires an independent “bias audit” of AI hiring tools conducted no more than one year before the tool’s use. The law also requires that audit information be made publicly available and that the company notify applicants that AI hiring algorithms will be used. Violations carry a civil penalty of $500 to $1,500 each.
Notably, 1894-A defines a bias audit as “an impartial evaluation by an independent auditor” used to test the technology for any discriminatory impact based on race, ethnicity, or gender. It remains unclear who is qualified to perform such an audit; so far, law firms are stepping in to provide the service. Employers that need an audit performed should not hesitate to contact their attorney.
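As we read the city’s implementing rules, the audit reports an “impact ratio” for each demographic category: the category’s selection rate divided by the selection rate of the most selected category. A minimal sketch of that arithmetic, with hypothetical category labels and counts:

```python
# Minimal sketch of the impact-ratio calculation a NYC bias audit
# reports per demographic category, as we understand the implementing
# rules. The category labels and counts below are hypothetical.

selected = {"category_a": 50, "category_b": 20, "category_c": 9}
applicants = {"category_a": 100, "category_b": 50, "category_c": 30}

# Selection rate: share of each category's applicants the tool advanced.
rates = {cat: selected[cat] / applicants[cat] for cat in selected}
top_rate = max(rates.values())  # rate of the most selected category

for cat, rate in sorted(rates.items()):
    # Impact ratio: each category's rate relative to the highest rate.
    print(f"{cat}: rate {rate:.0%}, impact ratio {rate / top_rate:.2f}")
```

An auditor would run this kind of comparison separately for the sex and race/ethnicity categories the rules contemplate, using the employer’s historical applicant data.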
Takeaways
As with any technology, AI recruiting tools will evolve over time, and we remain hopeful that these bias problems will be addressed as the tools mature. For now, however, employers using or planning to use AI-based hiring tools must ensure that their use of AI complies with the law. You should:
- Review the EEOC’s guidance to ensure your AI recruiting tools meet ADA and Title VII requirements. Specifically, make sure the algorithms do not discriminate against individuals based on protected characteristics or disabilities.
- Find out whether city or state laws require AI audits or impose restrictions on facial recognition or on AI analysis of video interviews.
- Make sure third-party AI technology providers know and follow federal, state, and local requirements.