AI-based hiring tools can violate Americans with Disabilities Act

Employment lawyer breaks down what employers need to know about new EEOC guidance

Does your hiring process rely on scanners that evaluate resumes based on keywords, video interviewing software, or testing and monitoring software?

If so, you need to be careful that these tools, which rely on algorithms or artificial intelligence (AI), don’t diminish the employment chances of people with disabilities.

The U.S. Equal Employment Opportunity Commission (EEOC) recently issued guidance warning of three common ways the use of these tools could violate the Americans with Disabilities Act:

  1. Failing to provide a reasonable accommodation necessary for an applicant to be evaluated fairly by an algorithm or AI-based tool;
  2. Using a decision-making tool that “screens out” an individual with a disability by preventing the applicant from meeting selection criteria due to a disability;
  3. Using a decision-making tool that incorporates disability-related inquiries or medical examinations.

According to the EEOC, employers are responsible for vetting potential bias in AI-based hiring tools, even if the software is provided by a vendor.

“The guidance will encourage employers to think more critically about the types of technologies they use to evaluate and screen candidates,” Lauren Daming, an employment and labor attorney at St. Louis-based law firm Greensfelder, Hemker & Gale, told HRD. Daming also holds the Certified Information Privacy Professional (CIPP) designation, a credential for professionals who help organizations around the world bolster compliance and risk mitigation practices.

“While AI and algorithmic decision-making programs have benefits, the guidance highlights their potential to negatively affect certain candidates,” Daming says. “Employers should make sure they understand how these technologies work, what they evaluate and how they might potentially disadvantage certain applicants before adopting them.”

While the guidance highlights the potential for disability discrimination and the need for accommodations, AI can also negatively affect people of color, women and candidates in other protected classes. Understandably, there’s a privacy concern associated with these technologies, which may capture biometric characteristics such as facial templates.

Daming says that several state laws regulate the collection and use of biometric data, and a few states specifically regulate the use of AI in hiring tools. “Apart from actual regulation, people are becoming more concerned about how their data is captured and used by companies,” Daming says. “They may also be concerned that an algorithm rather than a human is evaluating their potential for a position. There’s a creepiness factor to that.”

The guidance is part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, an agency-wide effort launched last year to ensure that software, including AI, machine learning and other emerging technologies used in hiring and other employment decisions, complies with the federal civil rights laws that the EEOC enforces. The initiative’s goal is to help employers, employees, job applicants and vendors ensure these technologies are used fairly and consistently with federal equal employment opportunity laws.

The Federal Trade Commission (FTC), which monitors companies for unfair or deceptive business practices, has also recently aimed a spotlight on AI, Daming says. In April 2021, the agency released informal guidance advising companies to watch for potential bias when using algorithmic decision-making software. In December 2021, the FTC announced its intention to pursue rulemaking to ensure that algorithmic decision-making doesn’t result in unlawful discrimination.

“The best thing employers can do is be transparent with applicants and employees about how the technology works,” Daming says. “That gives individuals control over whether they want their data to be evaluated and whether they may need an accommodation. Of course, that requires that employers fully understand the technology and how it may impact job candidates.”

More than a dozen of the world’s largest employers agree that bias is a major issue when it comes to algorithms for recruitment, prospecting and hiring purposes. In December 2021, the Data & Trust Alliance formed to focus on responsible data and AI practices. Members include Walmart, Meta (formerly known as Facebook), IBM, American Express, CVS Health, General Motors, Humana, Mastercard, Nielsen, Nike, Under Armour, Deloitte and Diveplane. Collectively, they employ more than 3.7 million people, according to a press release.

In April, the Neurodiversity @ Work Employer Roundtable and Disability:IN, a global nonprofit headquartered in Alexandria, VA, joined forces to launch the Neurodiversity Career Connector (NDCC), a career portal dedicated to neurodivergent job seekers. The new marketplace connects neurodivergent people with companies that have already committed to neurodiversity hiring programs and have open roles in areas such as HR, finance, customer service and science, technology, engineering and math (STEM).

The roundtable started in 2017 with six founding members: Microsoft, DXC Technology, EY, Ford, JP Morgan Chase, and SAP. California companies that have since joined the roundtable include Google, Hewlett-Packard (HP), Chevron, Wells Fargo, Qualcomm, Salesforce, VMware and Warner Brothers.