Framework focuses on 'FASTER' principles
The federal government has released preliminary guidance to federal institutions on their use of generative artificial intelligence (AI) tools for work.
The Directive on Automated Decision-Making applies to automated systems, including those that rely on AI, used to influence or make administrative decisions.
"These guidelines that we have issued will make sure that employees are aware of not using private or secret information, making sure that content is factual, making sure that we are transparent about its use, and making sure that we're complying with laws and policies as well," said Anita Anand, Treasury Board president, said in a CBC report.
To maintain public trust and ensure the responsible use of generative AI tools, federal institutions should align with the “FASTER” principles:
- Fair: ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations
- Accountable: take responsibility for the content generated by these tools. This includes making sure it is factual, legal, ethical, and compliant with the terms of use
- Secure: ensure that the infrastructure and tools are appropriate for the security classification of the information and that privacy and personal information are protected
- Transparent: identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; document decisions and be able to provide explanations if tools are used to support decision-making
- Educated: learn about the strengths, limitations and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs
- Relevant: make sure the use of generative AI tools supports user and organizational needs and contributes to improved outcomes for Canadians; identify appropriate tools for the task; AI tools aren’t the best choice in every situation
In June, Ontario's Information and Privacy Commissioner, Patricia Kosseim, called on the provincial government to put in place a “robust framework” to govern the public sector's use of AI technologies.
Issues and best practices for AI use
The federal government’s AI use framework also details some potential issues with the use of the technology at work, including:
- Some generative AI tools do not meet government information security requirements.
- Generated content may amplify biases or other harmful ideas that are dominant in the training data.
- Generated content may be inaccurate, incoherent or incomplete.
- Over-reliance on AI can unduly interfere with judgment, stifle creativity and erode workforce capabilities.
- Generative AI poses risks to human rights, privacy, intellectual property protection, and procedural fairness.
- People may not know that they are interacting with an AI system, or they may wrongly assume that AI is being used.
- The development and use of generative AI systems can have significant environmental costs.
The framework also includes numerous best practices to address these issues, including:
- Don’t enter sensitive or personal information into any tools not managed by the federal government.
- Clearly indicate that you have used generative AI to develop content.
- Consider whether you need to use generative AI to meet user and organizational needs.
- Consult your institution’s legal services about the legal risks of deploying generative AI tools or using them in service delivery. The consultation could involve a review of the supplier’s terms of use, copyright policy, privacy policy and other legal documents.
- Clearly communicate when and how the federal government is using AI in interactions with the public.
- Use generative AI tools hosted in zero-emission data centres.
In April, Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, along with some 2,200 others, called for a six-month pause on developing systems more powerful than GPT-4, citing risks to society and humanity. A month later, AI pioneer Geoffrey Hinton, the former Google executive dubbed "the godfather of AI", raised concerns about the technology's darker side, saying that advancements in AI are pushing the world into "a period of huge uncertainty".
‘Not about replacing employees’
“Artificial intelligence gives government an invaluable opportunity to improve services to the Canadians we serve,” former Treasury Board president Jane Philpott previously said. “Canada’s leadership in the field of artificial intelligence and our burgeoning AI industry are creating a powerful partnership to improve digital government for the betterment of all.”
While the federal government fully acknowledges the use of AI at work, the framework is not a bid to eliminate jobs, Anand noted in the CBC report.
"This is not about replacing employees at all," Anand said. "This use of generative AI is as a tool to further the work of existing and future employees."
Meanwhile, Jennifer Carr, president of the Professional Institute of the Public Service of Canada (PIPSC), noted that the guidelines on AI use in the public service fall short of what the union is calling for.
"It only proposes that the government 'be careful' on how they use AI," Carr said in the same CBC report. "'Be careful' is a very subjective term. What we're really looking for is that there are strict regulations or guidelines, where there are go and no-go zones."
In Canada, almost 20% of employers believe that AI is useful but won’t overtake traditional ways of working, according to a previous report.