Disability bias should be addressed in AI rules, advocates say

Disabled workers are looking to federal regulators to crack down on artificial intelligence tools that could be biased against them.

At a recent American Bar Association event, US Equal Employment Opportunity Commission Chair Charlotte Burrows said she was particularly interested in guidance that could protect people with disabilities from bias in AI tools. According to Burrows, up to 83% of employers and up to 90% of Fortune 500 companies use some form of automated tool to screen or rank job candidates.

The problem is that games or AI-powered personality tests used for hiring or performance reviews can be more difficult for people with intellectual disabilities, for example. Artificial intelligence software that tracks a candidate’s speech or body language during an interview could also create bias against people with speech impairments, people with visible disabilities, or those whose disabilities affect their movement.

“This is an area I’ve identified where it might be useful for us to provide assistance through guidance,” Burrows said of the impact of AI tools on people with disabilities.

The EEOC, which enforces federal anti-discrimination laws in the workplace, announced in October that it would study how employers use AI to hire, promote and fire workers. The last time the commission formally issued guidelines on hiring assessments was in 1978.

Among other things, those guidelines establish a “four-fifths rule,” which flags a selection procedure as potentially discriminatory if the selection rate for a protected group is less than 80% of the rate for the group with the highest selection rate.

“I’m not someone who believes that because they’re from 1978 we have to throw them away,” Burrows said, calling the four-fifths rule a starting point, “not the end of the analysis.”
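For illustration, here is a minimal sketch in Python of the four-fifths check described above. The group names and selection counts are hypothetical, and this is only the starting-point arithmetic, not the EEOC’s full adverse-impact analysis.

# Minimal sketch of the four-fifths (80%) rule described above.
# Group names and counts are hypothetical, for illustration only.
def selection_rates(groups):
    # groups maps a group name to (number selected, number of applicants)
    return {name: selected / applicants for name, (selected, applicants) in groups.items()}

def four_fifths_flags(groups):
    # Flag any group whose selection rate falls below 80% of the highest group's rate
    rates = selection_rates(groups)
    top = max(rates.values())
    return {name: rate / top < 0.8 for name, rate in rates.items()}

applicants = {
    "group_a": (48, 100),  # 48% selection rate (highest)
    "group_b": (30, 100),  # 30% rate; 0.30 / 0.48 = 0.625, below the 0.8 threshold
}
print(four_fifths_flags(applicants))  # {'group_a': False, 'group_b': True}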

Reasonable accommodation

Urmila Janardan, a policy analyst at Upturn, a group that advocates using technology to promote equity, researched AI hiring technologies used in entry-level hourly jobs. She said employers often use personality tests or games to find candidates with certain characteristics, whether or not those traits are relevant to the position.

A hiring game, for example, might measure things like attention span and the ability to remember numbers, which may require accommodations for someone with a developmental disability. An assessment might also ask a candidate to identify the emotions of a person in an image, which can be more difficult for someone with autism, for example.

“The further a job assessment deviates from the core job functions, the more likely it is to discriminate on the basis of disability,” Janardan said. “Is it testing essential job functions or is it just a game? Is it something where you can clearly, obviously see the connection with the work or not? I think this is a very critical question.”

The EEOC does not currently track data on AI-related discrimination. The problem is further complicated by the fact that most candidates would not know what role AI tools played in their selection process, according to Ridhi Shetty, policy counsel at the Center for Democracy and Technology.

Applicants and employees should be informed about the AI tools used in their selection process or assessments, and employers should have accommodation plans that do not require the applicant to disclose that they have a disability, Shetty said.

But employers are rarely upfront about accommodation options when it comes to AI assessments, according to research from Upturn.

“It’s hard to know you need an accommodation,” Shetty said. “It’s hard to know that this particular assessment won’t actually show the employer what you know you would be able to demonstrate in a different way, and without that information, you don’t have the opportunity, as a candidate or an employee seeking advancement, to show why you would be suited to the position.”

Who is responsible?

The 1978 guidelines also do not address the liability of hiring-tool vendors. AI vendors often advertise their products as bias-free, but when bias is found, the discrimination claim falls squarely on the employer unless there is a shared-liability clause in the vendor contract.

“Increasingly, we’re seeing vendors getting ahead of this issue and being willing to work with employers on it, but because the ultimate responsibility lies with the employer, employers really need to take the initiative to understand the tool’s impact,” said Nathaniel M. Glasser, an Epstein Becker Green partner who works with employers and AI providers.

The guidelines, which predate the Americans with Disabilities Act, focus primarily on discrimination based on race and gender. Adapting AI tools to avoid bias against people with disabilities is more complicated because disabilities can take many forms and workers are not legally required to disclose that they have a disability.

Glasser said the conversation about AI bias has increasingly shifted to include the perspectives of workers with disabilities. AI tools are useful for employers who need to sift through large volumes of resumes or assess relevant skills, and if used correctly they could be less biased than traditional assessments, he said. The attorney said he advises clients to exercise due diligence when designing and implementing AI tools.

“It is important that employers understand how the tool works and what accommodations can be provided within the tool itself, but also have a plan for reasonable accommodation requests from people who cannot reasonably use the tool, or be assessed by it, because of the specific nature of their disability,” Glasser said.

Data collection

In a July 2021 letter to the Biden administration’s White House Office of Science and Technology Policy, the advocacy group Upturn suggested using commissioner charges and directed investigations, rarely used procedures that allow EEOC leadership to initiate targeted bias investigations, to address discrimination in hiring technology. It also urged the agency to require companies to share information about how they use AI tools.

According to Janardan, vendors she’s worked with often struggle to audit their own products and algorithms because employers who use them have no incentive to share their hiring data, which could expose them to lawsuits.

Upturn also called on the Department of Labor’s Office of Federal Contract Compliance Programs to use its authority to request information about AI tools. The OFCCP, which oversees only federal contractors, is an audit-based agency with more direct access to employer data than the EEOC.

“Given the extent to which employers and vendors have an informational advantage in this space, agencies need to be proactive and creative in their strategies to collect data and gain insight into the nature and extent of employers’ use of hiring technology,” the Upturn letter said.
