Using AI without a rulebook? Here's where HR should be careful, say lawyers

'It's hard to create responsible policies when you aren't understanding how these products are working'


HR departments are adopting AI technology at a growing rate, but the legislation around it is not keeping pace, according to a recent report.

Law firm Littler surveyed 400 in-house lawyers, HR professionals and other business leaders in the U.S., all of whom are decision makers in how their firm’s HR functions use AI. The results? There’s a real gap between HR departments’ adoption of predictive AI and their knowledge of, or adherence to, guidance on how to use it.

Organizations have become aware of the benefits that AI tools can bring to their processes, said Littler labor and employment (L&E) attorney Alice Wang. But a lack of guidance or legislation can expose them to risk.

“I think that there's a collective consciousness that trustworthiness is not an inherent quality of artificial intelligence, and that trustworthiness is really the product of an intentional alignment of the people and the process,” said Wang.

“The technologies and companies who are implementing these tools, they want to do it ‘right’, and they want to do it in a responsible, ethical, trustworthy manner, but they're not necessarily sure of how to do that. And so they are looking for guidance from regulatory bodies, and legislative agencies to guide that.”

Predictive AI for hiring can violate ADA

One of the most popular uses of AI tools so far in HR is to help screen and rank job applications, usually with third-party vendors acting as agents for HR.

Equal Employment Opportunity Commission (EEOC) chair Charlotte A. Burrows said in January that “as many as 83 percent of employers and up to 99 percent of Fortune 500 companies now use some form of automated tool to screen or rank candidates for hire.”

Virtual assistants, resume scanners, video interviewing software, and testing software that ranks candidates by score are all tools through which employers can inadvertently engage in discriminatory practices.

Charlotte Carne, senior L&E counsel for Dykema, asserted that HR professionals have to be especially careful with these tools, as they can result in violations of the Americans with Disabilities Act (ADA) if applicants are rejected because a disability prevented them from passing a certain screening step. As an example, Carne described a job applicant with limited manual dexterity being rejected because a screening test required keyboard use.

“If you're using just AI, the applicant could be rejected without consideration of a reasonable accommodation,” she said. “If you're using AI and a human resource professional, the applicant could be accommodated with a reasonable accommodation, such as a voice-to-text software.”

Best practices for third-party AI hiring

Carne also stressed that employers are liable for discrimination even if a third-party provider conducts the screening for them.

“The EEOC has made very clear that the employer will be liable for mistakes made by third party vendors. So, it's really important for employers to screen vendors,” she said, “even if they advertise that they’re bias-free.”

This includes asking potential vendors about any litigation history or complaints against them. It is also key to ensure that AI vendors conducting applicant screening forward any requests for accommodation directly and immediately to the employer.

Employers should also always inform job applicants when they are being screened by AI, Carne said, and communicate clearly that accommodations are available upon request, as well as how to request them.

Lastly, Carne recommends an internal audit of hiring results and processes at least yearly, to identify any biases and address them.

“You need to make sure that you're auditing your own use, and if it's biased, one way or the other, that you are okay with abandoning that use and starting over,” Carne said.
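
For teams wondering what such an audit might look like in practice, one common starting point is to compare selection rates across applicant groups against the EEOC's four-fifths rule of thumb for adverse impact. The sketch below is a minimal, hypothetical Python example: the group labels, sample data, and the decision to flag anything under the 80% threshold are illustrative assumptions, not a substitute for a legally reviewed audit process.

```python
# Minimal sketch of a hiring-outcome audit using the EEOC's four-fifths
# rule of thumb: a group whose selection rate falls below 80% of the
# highest group's rate may indicate adverse impact and warrants review.
# All records and group labels here are hypothetical.

from collections import defaultdict

def selection_rates(records):
    """Map each applicant group to its selection rate (selected / total)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

def four_fifths_flags(rates):
    """Flag groups whose rate is below 80% of the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best < 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes: (applicant group, was selected)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(records)
for group, flagged in four_fifths_flags(rates).items():
    status = "review for adverse impact" if flagged else "within threshold"
    print(f"{group}: selection rate {rates[group]:.0%} -> {status}")
```

In practice, an audit of this kind would also examine each stage of the pipeline separately (resume screen, assessment, interview), since bias introduced at one step can be masked in aggregate numbers.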

Legislation to control AI use in workplaces on its way

Currently, New York City is the only jurisdiction in the U.S. that has an active law regulating employment-related AI use: Local Law 144, which governs automated employment decision tools. It is generally accepted that California’s laws, when enacted, will set the standard for the rest of the nation.

Last month, California governor Gavin Newsom signed an executive order directing state agencies to analyze how they anticipate AI technologies will be used in their work. The San Francisco Chronicle reported that Newsom said he was taking a “deep dive” into AI.

California state senator Scott Wiener described a bill he plans to bring to the floor next session that would establish a state agency dedicated to regulating AI development and make AI developers more responsible for the end uses of their products, the Chronicle reported.

President Joe Biden also signed an executive order last month during an appearance in San Francisco, saying, “We can’t kid ourselves, there’s profound risks if we don’t do it well.”

Predictive vs. generative AI use in HR

However, the executive orders mainly apply to generative AI, Wang pointed out, while most HR departments are using predictive AI tools.

More than half (56%) of the survey respondents said they don’t use generative AI at all in their HR processes. Of those who do, 34% use it for creating materials such as job descriptions, onboarding documents and other employee communications. These applications can involve more opportunities for litigation, the report stated, such as “potential defamation, consumer protection, liability, privacy, intellectual property, ethics, and regulatory compliance issues.”

The National Labor Relations Board (NLRB) has also said it is illegal to use AI to interfere with workers’ rights to organize and strike, particularly when the technology is used to collect data on or surveil employees.

Wang pointed to educating employees on ethical AI use as the main priority for HR departments that want to use the technology responsibly and avoid litigation risk.

“It's hard to create responsible policies, hard to create responsible guardrails, when you aren't even quite understanding how these products are working,” she said. “I think we're in a timeframe where there's a lot of catchup on all sides, lawmakers and leaders. And that just makes it a little bit more challenging.”