Quality data set needed to reduce AI's recruitment bias

'We saw the impact of bias in training data sets on our AI model very clearly'

Having a "quality data set" is a critical element in reducing bias during recruitment, especially as more organisations utilise AI in the hiring process.

The use of AI in recruitment has yielded great benefits for HR teams, but concerns have recently emerged about whether the technology is as unbiased as initially believed.

In the case of Martian Logic, CEO Anwar Khalil said the company attempted to refine its recruitment process with a new AI model years ago.

However, Khalil said he quickly noticed unconscious biases leaking into the training data set and manifesting in the trained model's behaviour.

"We saw the impact of bias in training data sets on our AI model very clearly. So, how do you remove it?" the CEO previously said in an exclusive feature with HRD.

Diversity is an answer to this challenge, according to Khalil.

"But how can you create diversity, and is there such a thing as 'infinite' diversity to make your bias zero? I don't think that's ever really going to be possible," he said.

This is where a quality data set comes in, he said, underscoring the need for training data that steers the model away from such biases.

"In recruitment, you might be going through a list of people that have applied for a role. You might look at a first and a last name, and based on the world that you grew up in, an unconscious bias might creep into your mind and result in a quick decision that someone with this name couldn’t do this job," he said.

"That applicant will be placed into the 'no' pile, and you'll move on to the next candidate. The AI needs to learn that it shouldn't take that opinion into account."
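Khalil's point — that the model should never see a signal like an applicant's name in the first place — can be sketched as a simple preprocessing step. The field names and records below are illustrative only, not drawn from Martian Logic's actual system; this shows one common mitigation, redacting identity-revealing fields before data reaches a screening model:

```python
# Hypothetical sketch: strip identity fields (e.g. names) from candidate
# records before training or scoring, so a screening model cannot learn
# name-based proxies for protected attributes. Field names are invented.

def redact_identity_fields(candidate: dict,
                           fields=("first_name", "last_name")) -> dict:
    """Return a copy of the record with identity-revealing fields removed."""
    return {k: v for k, v in candidate.items() if k not in fields}

applicants = [
    {"first_name": "Amina", "last_name": "Hassan",
     "years_experience": 6, "skills_match": 0.82},
    {"first_name": "John", "last_name": "Smith",
     "years_experience": 4, "skills_match": 0.75},
]

# Only job-relevant features survive redaction.
training_rows = [redact_identity_fields(a) for a in applicants]
print(training_rows[0])  # {'years_experience': 6, 'skills_match': 0.82}
```

Redaction of this kind is only a first step — a model can still pick up indirect proxies for identity — which is why Khalil stresses the quality of the overall data set rather than any single fix.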

Khalil further discussed this recruitment challenge, and how AI is redefining HR, in this exclusive paper with HRD.