HR's AI dilemma: Game-changing lifeline or legal minefield?

Laura Blumenfeld of Blakes to offer insights for HR at upcoming Employment Law Masterclass

To ChatGPT or not to ChatGPT: that is the question for HR leaders everywhere. And when employment law is factored into the equation, the questions become even more pressing.

Laura Blumenfeld, Partner at Blake, Cassels & Graydon, says she’s keenly aware of the transformative potential that AI brings to employers – as well as the challenges.

"Employers are excited to take advantage of innovative new tools, which can be helpful in a number of areas, from recruitment to performance management and of course, productivity and efficiency.”

Blumenfeld, who’ll be speaking at HRD’s upcoming Employment Law Masterclass on the topic of AI, believes that the new tech is here to help – in conjunction with its human counterparts.

"Instead of manually reviewing essentially thousands of job applications, AI programs can now narrow the applicant pool in a matter of minutes."

Employers can input job descriptions, qualifications, and information about past successful candidates into AI programs, which then learn from this data to screen, rank, and shortlist candidates. These AI tools go beyond just processing applications—they actively search online job boards and resumes to identify potential candidates who may not have applied directly. Once the pool is narrowed, AI can communicate with promising candidates on a mass scale through automated messages and chatbots.

These chatbots play a crucial role in enhancing the candidate experience. Blumenfeld highlights their effectiveness:

“Chatbots in particular are helpful because they can help provide real-time answers to questions which might help further filter candidates."

For example, if a candidate asks about working hours or pay scale, a chatbot can provide an immediate response, allowing the candidate to self-select out of the process if the conditions do not align with their expectations.

Beyond recruitment, AI is making strides in automating tasks traditionally performed by employees, thereby enhancing efficiency and productivity. This automation is not just about getting work done faster; it also aims to reduce employee burnout.

As Blumenfeld puts it, AI allows employees to "focus on the substantive aspects of their jobs, while the AI tools do the more administrative tasks." For instance, an HR team might use AI to draft policies or agreements, but Blumenfeld cautions that "AI is a tool to help in the job performance that shouldn't be a replacement for human oversight."

Areas of concern with AI

In performance management, AI is particularly valuable in a world where remote work has become the norm. These tools can monitor and track employee engagement and productivity, says Blumenfeld.

“By the time a manager has to prepare a performance review, it might be hard to remember details from earlier in the year, but AI can look back and analyze the data about performance, attendance, goal achievement and other details and make recommendations on how an employee performed."

However, Blumenfeld acknowledges that while AI can be a powerful tool, it must be used with caution to avoid unintended consequences.

"There are certainly areas of concern," she says, emphasizing the importance of embracing AI's benefits while being mindful of its risks. Chief among these concerns are human rights and privacy issues, which Blumenfeld identifies as the most significant in the employment context.

From a human rights perspective, AI is theoretically supposed to reduce the risk of discrimination by making decisions based on data rather than human biases. However, this is not always the case.

"AI is only as good as the programmers and the data inputted into the system," Blumenfeld explains. If the data used to train the AI contains errors or reflects human biases, there is a risk of biased output. For example, if a recruitment AI was programmed to favor candidates similar to past successful applicants who were predominantly male, it might inadvertently filter out female candidates.

Privacy concerns are another critical issue. Employers must be cautious when using AI to collect candidate information, especially during the recruitment process. Blumenfeld warns that "AI recruitment programs won't only review information provided by an applicant but will also scrape the internet for any additional information it can find about the applicant."

This can lead to the collection of inaccurate or irrelevant information, raising significant privacy issues. To mitigate these risks, employers should ensure that AI tools are configured to draw on information from reliable sources, and be transparent with candidates about what data is being collected.

Regulatory framework for AI

When it comes to developing a regulatory framework for AI, Blumenfeld stresses that the approach is highly client-specific, as different workplaces use AI tools for different purposes.

“It's important to remember to circulate the policy and ideally have employees confirm that they've read it and that they agree to it."

Essentially, as AI continues to evolve, so too do the laws and regulations governing its use. Blumenfeld advises employers to stay up to date by regularly consulting with employment counsel to ensure compliance with new laws.

"The best way to mitigate this risk is to ensure...that there are policies in place regarding what AI systems can be used, when and how they can be used and who can use them.”

Want to learn more about upcoming changes to employment law and legislation? Register for HRD’s upcoming Employment Law Masterclass here.