Legal expert cites 'potential upsides… and dangerous downsides' of using AI tools in recruitment
As employers turn to artificial intelligence (AI) to cut the time and cost of recruitment, they run the risk of overlooking some of the best talent.
A new survey by Capterra found that an over-reliance on AI can backfire: while 62% of job seekers believe their chances of being hired increase when AI is used, 38% say they would reject offers from companies that depend heavily on AI during the hiring process.
And 60% of job seekers prefer to apply for jobs that offer an opportunity for human interaction at some stage of the hiring process.
"Surprisingly, job seekers most familiar with AI are the ones most likely to walk away from AI-heavy recruiting processes," according to the report.
If employers sense AI can give them an edge in the war for talent, they will explore it, said Cilla Robinson, a partner in the employee relations and safety team at King & Wood Mallesons in Sydney.
“But just like with every aspect of AI, there are potential upsides and also some very dangerous downsides.”
AI can save recruiters a huge amount of time when the pool of candidates is very large, by collating data and filtering CVs so that a first cohort of applicants doesn’t need to be reviewed by HR staff.
“That’s generally how it is being used,” Robinson said. “That’s an obvious and easy win – but then you have the bias component.”
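In its simplest form, that first-pass filtering amounts to scoring each CV against the job's requirements and discarding the rest. The sketch below is a deliberately minimal, hypothetical illustration (the function names and data are invented, and commercial applicant-tracking systems use far richer models):

```python
# A minimal, hypothetical sketch of first-pass CV screening: score each CV by
# its overlap with the required skills and shortlist only the top matches.
# All names and data here are invented for illustration.
import re

def score_cv(cv_text: str, required_skills: set[str]) -> float:
    """Fraction of required skills mentioned anywhere in the CV text."""
    words = set(re.findall(r"[a-z]+", cv_text.lower()))
    return len(required_skills & words) / len(required_skills)

def shortlist(cvs: dict[str, str], required_skills: set[str],
              cutoff: float = 0.5) -> list[str]:
    """Applicants whose CVs meet the cutoff, best score first."""
    scores = {name: score_cv(text, required_skills) for name, text in cvs.items()}
    return sorted((n for n, s in scores.items() if s >= cutoff),
                  key=scores.get, reverse=True)

applicants = {
    "A. Lee": "litigation research, contract drafting, python, sql",
    "B. Khan": "customer service and retail experience",
}
print(shortlist(applicants, {"python", "sql", "litigation"}))  # ['A. Lee']
```

Even at this level of simplicity, the cutoff silently decides which applicants a human recruiter never sees.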
Recruitment is not an exact science, she said: "It is prone to bias, and people are looking to AI to help with that."
Perhaps the most serious flaw in AI is its potential to replicate patterns of bias at enormous scale when it is guided by past hiring decisions, Robinson said.
She cited the example of a law firm recruiting for summer clerks out of university. The firm may hope to diversify its workforce, but an algorithm might zero in on obvious criteria: grades, interests, society memberships, etc.
“In a top-tier law firm context, if we just got AI to look at the past 10 or 20 years of successful summer clerks, I would say … we would essentially be having a risk of a bias towards privileged university students who went to a certain school, have had the time for extra-curricular activities and join clubs or societies, in addition to good grades and other traits we look for,” she said.
“We’d be potentially discriminating against people of a lower socioeconomic background, for example, that are working at uni instead of participating in Lawski [a snow skiing club].”
If an AI tool goes looking for candidates similar to those who have been successful in the past, it may exhibit an inherent bias against women, she said. That could see it weeding out applicants from women’s colleges, or CVs that mention women’s interests.
“The AI tools are essentially learning what is preferred, and that is a real danger,” she said.
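To see how that learning dynamic plays out, consider a toy scorer that weights candidate attributes by how often they appeared among past hires. The data below is invented, but the mechanism mirrors the one Robinson describes: attributes that merely proxy for privilege pick up positive weight.

```python
# A toy illustration of bias learned from past hiring. If historical hires skew
# toward one profile, attributes that merely proxy for privilege (school, clubs)
# end up carrying weight. All data here is invented.
from collections import Counter

past_hires = [
    {"grades": "high", "school": "private", "club": "ski"},
    {"grades": "high", "school": "private", "club": "ski"},
    {"grades": "high", "school": "public",  "club": "none"},
]

# "Train" by recording how often each attribute value appears among past hires.
counts = Counter((field, value) for hire in past_hires for field, value in hire.items())
weights = {fv: n / len(past_hires) for fv, n in counts.items()}

def score(candidate: dict[str, str]) -> float:
    """Sum the learned weight of each of the candidate's attributes."""
    return sum(weights.get((field, value), 0.0) for field, value in candidate.items())

# Identical grades, but the private-school ski-club member scores higher,
# purely because past hires looked like them.
print(score({"grades": "high", "school": "private", "club": "ski"}))   # ~2.33
print(score({"grades": "high", "school": "public",  "club": "none"}))  # ~1.67
```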
It can be a case of garbage in, garbage out, where the sophistication – or lack of sophistication – in an algorithm will ultimately determine whether AI delivers a positive or negative experience, Robinson said.
“There can be massive, massive benefits to eliminate bias, but it can also have the exact opposite effect if it’s not used in a sophisticated fashion.”
Some organisations are attempting to defuse bias in AI by de-identifying applications: removing an applicant’s gender and age, along with anything that might indicate a health issue or disability irrelevant to their ability to do the job.
“It is then purely looking at skills, tertiary expertise, the things we should be making these decisions on rather than all of those other things that a human gets attracted to, such as ‘That person went to my school’ or ‘They have the same hobbies as me’,” she said.
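The de-identification step she describes can be sketched in a few lines. The field names and redaction patterns below are hypothetical, and a production system would need far more careful treatment of free text:

```python
# A simplified, hypothetical sketch of de-identifying an application: drop fields
# that could reveal protected attributes, and redact gendered terms from free text,
# before a screening model ever sees the application.
import re

PROTECTED_FIELDS = {"name", "gender", "age", "date_of_birth", "health", "photo"}
GENDERED_TERMS = re.compile(r"\b(women's|men's|mr|mrs|ms)\b", re.IGNORECASE)

def deidentify(application: dict[str, str]) -> dict[str, str]:
    """Remove protected fields and redact gendered terms from the rest."""
    kept = {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}
    return {k: GENDERED_TERMS.sub("[redacted]", v) for k, v in kept.items()}

application = {
    "name": "Jane Citizen",
    "gender": "female",
    "age": "24",
    "skills": "contract drafting, research",
    "activities": "captain of the women's rowing club",
}
print(deidentify(application))
# {'skills': 'contract drafting, research',
#  'activities': "captain of the [redacted] rowing club"}
```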
Yet it’s not as simple as engineering bias out of off-the-shelf AI tools.
“Bias in Australia might be different from what it looks like in India, and bias in China would be very different from in the US,” she said. “It is actually quite hard to mitigate against all bias that exists in the world, but that’s essentially what we’re expecting when we use AI tools in recruitment.”
It’s not impossible for a human to outwit a robot. Robinson cited an example where a rejected applicant changed their age, landed an interview, and went on to accuse the employer of discrimination.
There’s also nothing to stop an applicant changing hobbies listed in a CV from female to male pursuits.
“There are examples where the AI has essentially perpetuated existing discrimination we have in society,” she said.
If the talent pool is to push back against AI, it must know that AI is being used. But most candidates don’t know.
“There is no obligation for employers to disclose that they’re using AI,” Robinson said.
“Even if they are disclosing it, there’s no way for anybody to interrogate whether it’s being used with altruistic, positive objectives that advance diversity, equity and inclusion … or is it being used to filter out anyone that, in an American context, for example, didn’t go to Harvard.”
Robinson said the potential for bias when using AI in recruiting highlights a gap in discrimination law.
“Our laws talk about discrimination being a person engaging in the discriminatory conduct,” she said. “If an algorithm is discriminating, can the employer be held liable?”
It is critical that HR teams are properly trained in the use of AI. Learning to use it is an iterative process, Robinson said, and an organisation that tries to guard against every possible outcome may only hamper innovation.
“The algorithm will do whatever it’s going to do; it just needs to have an ethical overlay to it that respects human rights, that respects human values and supports DEI principles. That’s the starting point.”
The Artificial Intelligence Expert Group, a federal government body, offers plenty of information to help HR professionals stay abreast of what is happening, she said.