Employers told to establish policies on generative AI amid growing use

New Zealand employers also told to be 'AI literate'

Businesses in New Zealand are being urged to come up with policies on generative AI as more employees utilise the emerging technology in workplaces.

The encouragement comes after a recent survey from Perceptive, commissioned by Kordia, revealed that only 12% of organisations have policies in place for AI.

Alastair Miller, Principal Consultant at Aura Information Security, said the findings indicate "room for improvement" for employers in New Zealand.

"Every business should look to create an AI policy that outlines how the company should strategically use AI and provide guidelines for employee usage," Miller said in a statement.

"Like any new technology, rather than blindly adopting it, it's worth defining the value or outcome you want to achieve – that way you can implement it effectively and safely into your organisation."

Be 'AI literate' amid risks

Meanwhile, older business leaders are also being encouraged to become more "AI literate", as the report found that generative AI tools are particularly popular among Gen Z staff.

According to the report, 53% of respondents said they had experience using generative AI tools such as ChatGPT, with more than a quarter saying they had used them for work.

"Gen Z are already widely using this technology, so older generations of business leaders need to upskill their knowledge and become 'AI Literate,' to ensure younger members of the workforce understand what acceptable usage is, and the risks if generative AI is used incorrectly," Miller said.

Miller was referring to the privacy and cybersecurity risks associated with growing AI use. Awareness remains low: only one in five respondents to the survey said they were aware of the dangers that come with generative AI.

According to Miller, financial information and commercially sensitive data should never be exposed to public AI tools such as ChatGPT. Customer information and personal data, including health records, credentials, and contact details, shouldn't be entered into public AI tools either.

"Once data is entered into a public AI tool, it becomes part of a pool of training data – and there's no telling who might be able to access it," Miller said. "That's why businesses need to ensure generative AI is used appropriately, and any sensitive data or personal information is kept away from these tools."

Privacy concerns and AI

But even the use of private AI tools would require caution, according to Miller.

"Even a private AI tool should go through a security assessment before you entrust your company data to it – this will ensure that there are protective controls in place, so you can reap any benefits without the repercussions," he said.

Previously, the Office of the Privacy Commissioner warned businesses about the potential consequences of using generative AI.

"Generative AI is covered by the Privacy Act 2020 and my Office will be working to ensure that is being complied with; we will investigate where appropriate," said Privacy Commissioner Michael Webster in a statement.