'Overarching checklist' also beneficial for businesses developing AI policies
Two researchers in New Zealand have led the development of a new analytical framework for AI use, amid growing calls worldwide for policies governing the technology.
The two researchers were Sir Peter Gluckman, director of think tank Koi Tū: The Centre for Informed Futures at Waipapa Taumata Rau, University of Auckland; and Hema Sridhar, Koi Tū strategic adviser for technological futures.
The framework, released under the International Science Council, takes the form of an "overarching checklist" that could be used by both government and non-governmental institutions.
"The framework identifies and explores the potential of a technology such as AI and its derivatives through a wide lens that encompasses human and societal wellbeing, as well as external factors, such as economics, politics, the environment and security," the paper said.
Sridhar said the framework would be useful for policymakers and decision-makers, and even for the private sector.
"It's useful for companies too because they should be thinking now about what they need to address and how to get social licence to use their technologies," Sridhar said in a statement.
The framework comes as experts urge employers to develop AI policies, with more employees using the technology at work.
In New Zealand, a recent survey from Perceptive revealed that only 12% of organisations have policies in place for AI.
"Every business should look to create an AI policy that outlines how the company should strategically use AI and provide guidelines for employee usage," said Alastair Miller, Principal Consultant at Aura Information Security, in a statement.
According to the paper, the analytical framework provides a groundwork for "impact assessments," which are required of organisations deploying AI in the European Union.
"The EU AI Act requires organisations that provide AI tools or adopt AI in their processes to undertake an impact assessment to identify the risk of their initiatives and apply an appropriate risk management approach. The framework presented here could be used as a foundation for this," the paper said.
It could also be used to enhance the ethical principles needed to guide and govern the use of AI.
"The framework can do this by providing a flexible foundation upon which trustworthy systems can be developed and ensuring the lawful, ethical, robust, and responsible use of the technology," the paper said.
Amid the widespread use of AI, Gluckman noted that this technology can either "create a nirvana" or "destroy the world."
"The reality is in the history of humankind, all technologies get used. They always get used for good purposes and bad purposes," he said in a statement.
"But having this sort of framework allows us to have the discussions about how to take any new technology and make it most likely that the good and beneficial purposes will be supported and the negative will be prevented."