What should employers consider when it comes to AI in the workplace? We went directly to the source
The world was recently introduced to OpenAI’s ChatGPT, an artificial intelligence (AI) chatbot that uses machine learning and natural language processing to generate responses to prompts such as “Write me an article about AI.” According to trends tracker Exploding Topics, more than 100 million people have used ChatGPT, and such programs will revolutionize how employers operate.
ChatGPT has demonstrated that AI tools can perform a wide range of business functions, including data collection, human resource (HR) solutions, and research. Given these benefits, some employers have entered into partnerships with technology companies to develop customized AI chatbots. KPMG in Australia, for example, is using a modified version of ChatGPT as a digital assistant for some of its employees.
We asked ChatGPT to summarize some of the key employment law risks for employers who use AI in their workplaces. We were pleasantly surprised that the chatbot demonstrated an awareness of its own limitations and identified legal risks relating to human rights, privacy, and potential breaches of the employment contract. Below, we review the answers we received and explore some of the issues that AI in the workplace raises.
Ensuring workplaces are free from discrimination
To prevent discrimination in the workplace, be aware that processes and materials produced by an AI tool could inadvertently introduce bias. With the support of legal counsel, HR should carefully review AI-generated materials through an equity lens and address issues of direct and indirect discrimination that could amount to a violation of an employer’s duties.
According to ChatGPT itself, “Employers should be mindful that ChatGPT’s responses are generated based on the data it has been trained on, which may include biases. Employers should take steps to ensure that ChatGPT’s responses do not discriminate against any individual or group of employees.”
For example, an AI chatbot may draft a job description that excludes certain protected groups, such as those who require accommodation for a physical disability. Similarly, a chatbot that is not equipped to understand a potential report of misconduct based on slang, GIFs, or emojis may overlook behaviour that is harassing or discriminatory. In addition to human rights considerations, these issues implicate an employer’s obligations under occupational health and safety statutes to investigate incidents of workplace harassment.
Some HR technology companies have even found that, when asked to draft performance feedback for hypothetical workers, ChatGPT relied on gender stereotypes when assigning pronouns to certain positions, referring to kindergarten teachers as “she” and mechanics as “he.” While this issue could likely be avoided with precise prompts, it is worth noting that, in a study by Textio, ChatGPT was also found to be more critical of female employees than their male counterparts. From an employment law perspective, this scenario presents litigation risk, as negative performance reviews generated with the assistance of AI can potentially become the basis for a discrimination claim.
While ChatGPT recognized its own potential biases, it’s also important for employers to recognize situations in which they intend to differentiate between employees based on bona fide occupational requirements, in accordance with human rights law. The standard for a bona fide occupational requirement is likely to require the sort of nuanced assessment that AI would be unable to perform without the direct guidance, scrutiny, and oversight of an experienced HR professional.
Employers should review AI-produced materials and limit the use of AI to low-risk tasks, under the oversight of experienced HR leaders and management.
Maintaining the privacy and confidentiality of information
The use of AI chatbots in the workplace could breach privacy laws where the application collects, uses, or discloses personal information in a manner that is not compliant with those laws. For example, an AI chatbot may collect and analyze data related to an employee’s performance, behaviour, and medical information, if this information is shared with the application. This creates issues where appropriate consent hasn’t been obtained. Safeguarding data and building in controls to avoid violating privacy laws are therefore key considerations. Recently, the Office of the Privacy Commissioner of Canada opened an investigation into ChatGPT following a consumer complaint touching on these issues.
In our interview, ChatGPT recognized its potential impact on privacy, stating that “employers need to ensure that they are complying with relevant data privacy laws when using ChatGPT. This includes ensuring that any personal information or data processed by ChatGPT is being handled in a secure and confidential manner.”
In a follow-up question, ChatGPT also correctly identified that provincial privacy legislation exists in British Columbia and Alberta, and that federal privacy legislation exists in the form of the Personal Information Protection and Electronic Documents Act (PIPEDA). However, as a note of caution, ChatGPT’s response did not reference Québec’s provincial privacy legislation, and it did refer to Ontario’s Personal Information Protection Act, a bill that was introduced in 2018 but never passed into law.
While ChatGPT’s response suggested that Canadian privacy legislation directly addresses the use of AI chatbots, there remain gaps in existing laws. The response we received also didn’t reference Bill C-27, the federal government’s proposed Digital Charter Implementation Act, which, if passed, would replace PIPEDA with the Consumer Privacy Protection Act and enact the Artificial Intelligence and Data Act. Although this legislation is still under review, it aims to directly regulate AI in Canada in specific sectors and to address some of the gaps in the current legislative framework.
Despite the limitations of ChatGPT’s response on the specifics, a key consideration for employers who implement AI in their workplace is the potential impact on their obligations under privacy law. We recommend working with legal counsel to conduct a review of the employer’s existing privacy, confidentiality, and IT policies to determine what revisions must be made to account for the use of AI, including chatbots. Employers should also track the progress of Bill C-27 and the federal government’s efforts to regulate AI, which may impose additional compliance obligations on employers.
Monitoring the risk of breaching an employment contract
Finally, ChatGPT highlighted the risk that using AI technologies may result in a breach of the terms of the contract between employers and workers. Focusing specifically on collective bargaining agreements, ChatGPT pointed out that “employers must ensure that their use of ChatGPT complies with any collective bargaining agreements that are in place with their employees. This may involve negotiating specific terms related to the use of AI technologies in the workplace.”
In fact, a breach of a collective agreement (or of an employment contract in the non-union context) could occur if AI use results in significant changes to employees’ job duties, responsibilities, working conditions, or workplace culture. These possibilities aren’t just theoretical: companies have increasingly adopted AI technology to partially or fully complete tasks that were previously performed entirely by humans.
To address the risk of a breach of contract claim in the non-union context, employers are well advised to review their employment contracts to determine whether they have explicitly reserved the right to unilaterally change an employee’s duties, responsibilities, or working conditions. Similarly, employers in unionized workplaces should regularly review the management rights provisions of their collective agreements before making changes to the workplace that would result from incorporating AI technologies.
Is ChatGPT the answer to work, the universe, and everything?
Taking inspiration from The Hitchhiker’s Guide to the Galaxy, we closed off our interview by asking ChatGPT if it is the answer to work, the universe, and everything. We received this response: “While ChatGPT can provide useful assistance in certain tasks and scenarios, it cannot replace the value of human expertise, creativity, and empathy. It is important to recognize the potential benefits and limitations of technology, and to use it in a way that enhances, rather than replaces, human capabilities and wellbeing.”
Despite these reassurances, AI technology continues to advance and be adopted in workplaces at a rapid rate. A recent survey by KPMG in Canada found that nearly two-thirds (65 per cent) of companies in the US are regularly using ChatGPT to improve their operations, compared to just over one-third (37 per cent) of Canadian companies. We anticipate that businesses in Canada will increasingly adopt AI and narrow this disparity in the coming months and years. As they do so, advance planning and careful navigation of the resulting legal issues will be key.
Introducing an AI chatbot to the workplace without appropriate guardrails carries the risk of violating any number of federal and provincial employment and labour laws. In particular, employers should be prepared to update contracts, policies, and procedures to address potential issues relating to AI in the workplace, such as human rights, privacy, and contractual obligations, and to provide the necessary training and support to employees.
Matthew Stanton and Maciej Lipinski are employment and labour lawyers at KPMG Law LLP.