Top 6 issues for employers in the year of AI agents

Expansion of AI use brings opportunities, legal risks

A new co-worker has arrived at your workplace.

They never take breaks, never forget instructions, and execute tasks with incredible efficiency. This colleague doesn’t ask for a raise and doesn’t engage in office politics. To top it all off, they are polite and friendly - they always know just what to say.

At first glance, the arrival of AI agents in the workplace may seem like a utopian vision. However, there’s another side to the story. Unlike human colleagues, AI agents lack intuition, emotional intelligence, and ethical reasoning.

They may make confident but incorrect decisions, struggle with ambiguous tasks, or behave unpredictably when encountering new situations. In some cases, they might even reinforce biases or create compliance risks that employers may struggle to foresee.

The rapid emergence of AI agents represents a transformative moment for Canadian workplaces. Employers must prepare for both the opportunities and challenges posed by this new wave of automation.

We’ve become familiar with generative AI tools, such as ChatGPT, which focus on creating text, images, or code based on prompts. AI agents extend beyond content generation to act autonomously in pursuit of set objectives. AI agents can:

  • Execute multi-step tasks without constant human intervention.
  • Make decisions based on learned patterns and real-time data.
  • Interact dynamically with users, software, and even physical environments.

These characteristics allow AI agents to revolutionize industries by taking on roles traditionally performed by humans. For example, AI agents currently on the market may be used to handle customer service phone calls, manage marketing campaigns, and perform increasingly complex administrative tasks. However, AI agents also introduce unique risks that employers must address proactively.

Issue 1: Hallucinations, ethical risks, and quality control

Let’s start with the obvious - AI agents are not perfectly reliable. Many of the risks of AI agents overlap with those associated with generative AI, such as hallucinations and unfair or biased decision-making. However, AI agents exacerbate these risks because they can act independently, without human oversight.

Employers should consider the following strategies for mitigating risk:

  • Gradual deployment with human oversight: Initially, AI systems should operate in a limited capacity, with human review over any decisions that could significantly impact employees or customers. Employers should consider restricting AI agents from operating independently in any areas in which errors could have serious consequences, such as hiring, firing, financial management, or workplace discipline.
  • Due diligence on AI vendors: Third-party AI systems should be thoroughly vetted to ensure they comply with ethical standards and legal requirements. Look for vendors that provide transparency in how their AI models were trained and how they operate. Look for AI companies that implement third-party testing to detect and mitigate bias and unfairness.
  • Access restrictions: AI agents should be granted only the necessary access to databases and systems, reducing the risk of unintended harm.

Issue 2: Privacy and data security

Privacy is another crucial concern with AI agents. AI systems often process vast amounts of personal and organizational data, raising concerns about data security, employee privacy, and regulatory compliance.

Employers must ensure that AI systems adhere to applicable privacy laws, such as Quebec’s Law 25, which requires transparency in automated decision-making. Organizations should conduct a privacy impact assessment (PIA) prior to investing in an AI agent with access to personal information. Organizations should also:

  • Establish a need to use AI agents for the given purposes (and avoid using privacy-impacting agents without a demonstrable need).
  • Use AI models that prioritize data minimization and anonymization.
  • Use AI models with appropriate safeguards in place.
  • Obtain informed consent before deploying AI-driven monitoring or decision-making.
  • Provide visibility into how data is being used.
  • Monitor for and prevent inappropriate uses.

Ensuring strong privacy safeguards will help maintain trust in AI-powered workplaces.

Issue 3: Regulatory compliance

The Canadian legal landscape for AI remains in flux. While Canada’s Artificial Intelligence and Data Act (AIDA) failed to pass into law, employers must still navigate existing legal frameworks that apply to AI implementation. These include industry-specific regulations and human rights-related guidance. For instance, employers in Ontario should consider carrying out a Human Rights Impact Assessment following guidance from the Ontario Human Rights Commission.

Issue 4: Workforce disruption and strategic adaptation

Successfully integrating AI agents into the workforce starts with identifying the right applications. Employers should treat AI agents as new employees - ones that need to be onboarded, trained, and assigned the right tasks.

Rather than replacing human workers, AI should be leveraged to handle repetitive, tedious tasks, allowing employees to focus on higher-value, strategic, and creative work. This shift will require organizations to rethink job roles and workforce planning. Employers should also be mindful of any material changes to employees’ duties, which can give rise to constructive dismissal claims, while adopting these new technologies.

Key trends to watch include:

  • The rise of AI-augmented roles: Many jobs won’t disappear but will evolve to incorporate AI collaboration. Employees will need to adapt to working alongside AI tools that handle repetitive or analytical tasks.
  • Increased demand for AI literacy: Understanding how AI functions will become a core competency for many roles. Employers should invest in AI training programs to upskill workers.
  • Rapid skills obsolescence: Some traditional job functions may decline in relevance, particularly in data processing, administrative support, and customer service. Employers should prepare for reskilling initiatives.
  • Emphasis on human-centric skills: As AI takes over routine tasks, uniquely human abilities - such as emotional intelligence, ethical reasoning, and complex problem-solving - will become more valuable than ever.

Organizations that treat workforce adaptation as a strategic imperative rather than a reactive measure will be better positioned for long-term success.

Issue 5: Performance management in the AI era

AI agents also introduce new considerations for performance measurement. While AI-driven analytics can provide unprecedented insights into employee productivity, employers must balance efficiency with fairness and employee trust.

Recommended approaches include:

  • Assessment of need: AI should not be used for intrusive surveillance. Data collection should be proportional, ethical, and compliant with privacy regulations. Organizations should define the exact purposes for monitoring. Where employee privacy laws apply, monitoring may not be permitted, depending on the context.
  • Setting clear boundaries for AI monitoring: Employees should know how AI is being used to monitor productivity or behaviour. Policies should establish how, in what circumstances, and for what purposes employees are monitored. This is explicitly required under privacy laws in British Columbia, Alberta, and Quebec, and under employment standards legislation in Ontario.
  • Blending quantitative and qualitative metrics: While AI can analyze data points like output and efficiency, human oversight is essential to capture contextual and interpersonal aspects of performance.

By adopting these principles, employers can ensure that AI enhances, rather than undermines, workplace trust.

Issue 6: Environmental impact

The environmental impact of AI is an emerging concern. AI systems, particularly those using large-scale deep learning models, require substantial computing power, leading to significant energy consumption and carbon emissions. Employers should consider:

  • Investing in energy-efficient AI models.
  • Partnering with cloud providers that use renewable energy sources.
  • Optimizing AI processes to reduce unnecessary computation.

Sustainable AI practices will help align business operations with environmental responsibility.

Ethical, transparent AI agents

AI agents will transform workplaces, bringing both opportunities and challenges. Employers must balance innovation with responsibility, ensuring AI is used ethically and transparently. AI agents should complement human skills, not replace them, helping employees focus on creativity, problem-solving, and strategy.

To succeed, companies should invest in AI training, maintain open communication with employees, and establish clear guidelines for AI use. A proactive approach will allow organizations to harness AI’s benefits while addressing concerns around privacy, bias, and job security.

By integrating AI agents thoughtfully, businesses can create a workplace that is efficient, fair, and future-ready.

Robbie Grant is an associate with McMillan LLP in Toronto specializing in privacy and data security. Kristen Shaw is an associate with McMillan LLP in Toronto specializing in employment and labour relations.