Employment lawyer weighs in on the human vs robot debate
With more and more Canadian employers dabbling in ChatGPT and AI, employees are understandably anxious.
A recent KPMG survey, released this month, found that 37% of Canadian organizations are exploring generative AI - and that 35% are already using the bots in organizational processes. As AI adoption continues to grow, the question on the lips of both employers and employees is: “Can a worker be fired and replaced by a robot?”
“The general answer to that is yes, they can,” says Gavin Marshall, partner at law firm Roper Greyell LLP. “An employer is entitled to terminate an employee for any lawful reason in Canada - and the introduction of technology which makes employees redundant in the work they were previously doing can result in layoff.”
There are some nuances to that, however. It comes down to a judgment call: whether an employer can replace an employee with AI outright, or whether it should reassign them to new duties. And the issue is only further complicated in unionized workplaces, Marshall tells HRD.
“While an employer would be entitled to lay off workers in a unionized workplace due to technological change, those kinds of layoffs will be subject to collective agreement provisions. These may restrict, or sometimes even prohibit, layoffs due to technological change. As such, you have to check the specific language in the collective agreement in a unionized workplace in order to determine what the rights of the employee might be.”
Attribution concerns with algorithms
Essentially, there’s subtlety all round - nothing new in the legal landscape. While questions still swirl around the legalities of being ousted by a robot, employers would do well to focus on building out their internal AI policies.
Some organizations have gone so far as to ban ChatGPT over security concerns. For Marshall, however, the central policy issue is attribution - or the lack of it.
“The most important word here is attribution,” he tells HRD. “If your employees are performing functions with ChatGPT, then you might consider a policy which ensures the employee is attributing that work. For instance, if a worker uses the tool to draft a letter, the employee would attribute that draft to the AI.”
Be careful what you say - it could come back to haunt you
It’s all about transparency. Even if content has been built by the AI and then modified by the employee, the involvement of artificial intelligence in the work product should be made clear.
“Another foggy issue right now is the impact of privacy law on the manner in which tools like ChatGPT do their work,” adds Marshall. “That’s a much larger question for employers, who should be looking at the data that commercial parties place into the digital domain. These AI tools perform a background learning function - scraping the internet for data and then digesting it - and that is what allows them to be so powerful.
“It probably behooves us all to think with renewed caution about what we put out into the public domain, because it could be regurgitated and used by an AI tool.”
And, when it comes to breaching an AI policy, the repercussions are similar to those for breaching any other policy.
“Under British Columbia law, a breach of a policy can result in the disciplining of that employee - and possibly their termination, if the breach is serious enough,” says Marshall.
“Employers would have discretion to deal with that. If an AI policy exists and an employee ignores it - by, for example, failing to attribute - then one question might be whether the act was intentional. Was it serious or not? Was there a desire or an intention to deceive?
“All those things would roll into the employer’s consideration. Ultimately, any policy breach can result in an employee being warned, disciplined, or even terminated if it’s serious enough - AI would be no different.”