Employment lawyer Shana Wolch previews artificial intelligence session for HRD’s Employment Law Masterclass Vancouver
"AI is already making its mark, and the key now is how we harness its potential while mitigating associated risks," says Shana Wolch, partner at McCarthy Tétrault LLP.
Wolch will be speaking at the upcoming virtual event Employment Law Masterclass Vancouver, which will be held on Feb. 22, 2024. Her session is titled ‘Legal implications of AI in workplace.’
At the event, Wolch will discuss the impact artificial intelligence will have on employers, the ethical and legal HR issues surrounding its use, and the risks and benefits it presents.
Employers are increasingly recognizing the value AI brings to the table, Wolch says, pointing to the uptick in productivity and efficiency that follows AI integration. Employees stand to gain as well, she says, as AI frees up time for more engaging tasks and personal pursuits.
"Employers are witnessing tangible benefits, such as streamlined workflows and reduced overhead costs."
However, amid the enthusiasm for AI's benefits, Wolch cautions against overlooking potential drawbacks. One significant concern is the restructuring of job roles, which could leave employees feeling sidelined or replaced, or even position them to allege constructive dismissal.
"AI implementation could inadvertently redefine job duties, triggering legal implications, especially in unionized environments," she says.
“Employers may not actually redefine job duties in writing or a job offer, but the implementation itself might do that, and employers may need to think about how they implement the use of AI and whether they’re enhancing an employee's job responsibilities or replacing them to a point that wasn't contemplated and could amount to a constructive dismissal.”
The ethical and legal considerations surrounding AI extend beyond employment restructuring, with Wolch emphasizing the importance of safeguarding privacy and human rights in AI utilization. The reliability of AI-generated outputs also poses challenges, due to the risk of misinformation or “deep fakes,” underscoring the need for discernment in AI reliance. Intellectual property infringement and biased data are additional areas of concern that demand attention, she says.
“There's a big, real underlying concern about things becoming untrue. With the capabilities of AI, or the frailties of it, we could be facing a lot of either unrealistic, interesting, or untrue data that we’re relying on, and that’s a scary concept.”
In response to these challenges, regulatory frameworks are emerging to govern AI usage. Wolch highlights proposed legislation such as Bill C-27, the Digital Charter Implementation Act, which aims to establish standards for AI governance. The legislation emphasizes transparency, safety, and accountability, with hefty fines for non-compliance.
Alongside legislative efforts, industry-led initiatives are shaping ethical AI practices. Wolch discusses voluntary codes of conduct that emphasize accountability and transparency, which aim to mitigate risks and ensure human oversight in AI systems.
“I think having an ongoing assessment of reasonably foreseeable potential risks and how to mitigate is going to be a really big feature that people care about,” she says.