The world’s first anti-bias AI legislation is shedding light on employer concerns
With the world’s first AI anti-bias law coming into effect in New York City on April 15th, the question on all HR leaders’ minds is: how far can we trust tech to be non-discriminatory?
The new legislation requires that any AI-driven hiring tool used by employers in the city be audited for potential bias and the potential for discrimination against protected classes. The new rules also bar algorithms from using protected characteristics – such as age, race, gender and sexuality – when making any decisions. But do these laws go far enough? Or should leaders be looking deeper when it comes to their own internal AI tools?
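To make the audit requirement concrete, here is a minimal sketch of the kind of adverse-impact calculation such an audit might run: comparing each group’s selection rate against the most-selected group’s and flagging ratios that fall below the familiar four-fifths benchmark. The group names, sample data and threshold are illustrative assumptions, not the law’s prescribed methodology.

```python
# Hedged sketch of an adverse-impact check a bias audit might perform.
# Groups, outcomes and the 0.8 cutoff (the EEOC "four-fifths rule")
# are illustrative assumptions only.

from collections import Counter

# Hypothetical screening outcomes: (group, advanced_by_tool)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in outcomes)
passes = Counter(group for group, passed in outcomes if passed)

# Selection rate per group: share of applicants the tool advanced.
rates = {group: passes[group] / totals[group] for group in totals}
best = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: this group's rate relative to the most-selected group.
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```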
With strong opinions on both sides, it’s a divisive issue in HR circles – but one that’s been given plenty of attention in recent studies. According to a report from the AI Now Institute, AI-driven hiring systems are twice as likely to reject female applicants as male applicants.
Further, a 2018 Boston Consulting Group survey found that 84% of executives reported that their AI algorithms reinforced existing gender or racial bias. It’s not good reading for tech-heads, but it doesn’t necessarily mean you should avoid AI altogether.
“Bias and discrimination is undoubtedly embedded in your hiring process because, as humans, we are inherently biased,” says Linda Ho, chief people officer at software development company Seismic. “And, unfortunately, leveraging artificial intelligence to help streamline the hiring process has the potential to amplify biases, because humans are the ones informing the algorithms that make these decisions.
“This can take many forms: some AI-driven solutions analyze resumes with keywords that lean into class bias, while other tools measure how candidates perform in a video interview, which may amplify bias toward specific groups of people.”
While this may seem disheartening, there is still a role for AI in enhancing processes and mitigating bias. Used correctly – and always under human supervision – AI and algorithms can help identify and remove structural biases across the organizational life cycle. According to data from iCIMS, 80% of employers are now using AI in their hiring processes – with that percentage only set to increase as AI becomes more intelligent.
Ho tells HRD that there are three steps to ensuring you’re leveraging AI in the right way – and not falling prey to the looming pitfalls.
“The first step to preventing discrimination in AI-based employment screening is ensuring that DEI is foundational to a company’s talent strategy and a key component of the company’s culture and values,” she says. “As HR leaders, it’s our role to foster a diverse, equitable, and inclusive workplace that is embedded in how we operate, including our hiring practices.
“The second step is to never rely on just one input for decision making, but instead collect multiple data points so a diverse set of perspectives are represented. AI-screening tools can mitigate bias if they’re used to standardize the interview process across diverse interview teams.”
Thirdly, Ho says it’s about being vigilant in preventing bias – beginning with educating recruiters and hiring managers to recognize biases, and to intentionally shift processes to mitigate them.
Not everyone is aboard the AI train, however, with opposition coming from some surprising places. Earlier this year, Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak called for a six-month pause on developing ever more powerful AI platforms.
In an open letter, the group of over 2,000 people warned: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
And while it may sound like something from a ’90s sci-fi movie, for employers the realities of getting AI wrong can spell hefty legal bills.
Speaking to HRD in a previous interview, Mike MacLellan, partner at Ontario-based law firm CCPartners, said that if AI makes a mistake, the employer is still held legally accountable. Essentially, you can’t start pointing fingers and shifting blame to the robots.
“Employers need to take responsibility for the entire organization,” he added. “If you’re putting any kind of faith into a computer program, you’re ultimately responsible for the output. It’s no different from putting your employee on a forklift in the warehouse – that piece of machinery needs to be in working order. And if something goes wrong, the employer is liable.”
The bottom line on the role of AI, robotics and algorithms in HR is that it’s better to walk before you run. As Dave Burchfield, global director of people strategy at McDonald’s, told HRD: “It’s fun to go after the shiny new thing – but get the plumbing right first.”