Strong majority of Canadians say use of AI should be regulated in Canada

9 in 10 Canadians call for ethical guidance in AI development, finds report

Nearly all Canadians are looking for ground rules when it comes to the use and development of artificial intelligence (AI) in the country, according to a recent report.

Over 90% of Canadians say that AI development should be guided by ethical principles, reports TELUS. The findings also point to a diversity concern.

Nearly half of respondents believe that AI governance should include community consultation to ensure that diverse perspectives are considered and bias is minimized.

That’s because one in five Canadians has personally experienced discrimination from AI technology, including misrepresentation, stereotyping and reduced access to resources and opportunities.

Also, 61% of respondents identifying as LGBTQ2S+ fear that AI may be used against certain people and communities, and 42% of respondents who self-identified as part of a racialized group feel that AI is biased against them and their peers.

The majority of respondents believe that this regulation should be government-led. And two in three suggest that input is needed from professionals in data ethics, law and academia, while fewer think it’s important to include those from impacted communities.

Eight in 10 respondents aged 12 to 17 expect their generation will have to fix problems left behind by the current usage of AI.

Many leaders and employees across the world don't think their organizations will implement AI responsibly at work, according to a previous Workday report.

Where should AI be used?

However, respondents do not find the use of AI acceptable in all fields, according to TELUS’s survey of nearly 5,000 Canadians.

Respondents believe AI use is acceptable in these fields, though to varying degrees:

  • high-stakes healthcare (75%)
  • internet applications (54%)
  • education and research (52%)
  • online shopping (51%)
  • personal banking (32%)
  • social media (30%)

However, “when given an example of a healthcare provider using AI to detect potentially cancerous cells, more than 75% of respondents agreed that human oversight would be necessary,” read part of the TELUS AI report: The power of perspectives in Canada. “Similar results were seen across examples of other high-stakes use cases in healthcare, as well as crime identification and identity theft.”

Many Canadian IT firms are increasingly tapping into AI capabilities, and even legal professionals believe that AI could assist with the automation and simplification of time-consuming and error-prone manual processes, according to separate reports from IBM and OpenText.

And over half of Canadian professionals are using AI with no workplace policies, according to a previous report from Salesforce.

However, “new capabilities and ultimate value of AI” depend on improved education and access, “which, if continued, will create conditions for better understanding and equity around the technology,” said TELUS in its report.

Also, “It’s essential that AI doesn’t become a tool for amplifying historical inequities or further polarizing diverse groups,” it said. “Together, we can collaboratively design and implement an ethical and accessible AI ecosystem that prioritizes human values.”

Governments have a crucial role to play in all of this, said Uzair Anwar, an MBA candidate at Regent University, via LinkedIn.

“The role of governments in regulating AI development is crucial to ensure ethical considerations and strike a balance between innovation and societal safety. Ethical considerations in AI development regulation are essential to address concerns such as privacy, bias, and accountability. 

“Governments should establish clear guidelines and regulations that promote transparency, fairness, and accountability in AI systems. By doing so, they can prevent the misuse of AI technology and protect individuals' rights.”