Generative AI is being used by NZ employers – and HR has a key role to play in protecting employers, says academic
Roughly a quarter of New Zealand businesses are using generative AI tools like ChatGPT in their work.
However, only 12% of organisations have policies in place for AI, according to a survey from Perceptive, commissioned by Kordia.
The privacy risk is significant, says Dr Alex Richter of Victoria University, but HR, as a strategic leader in a time of major change, has a key role to play not only in helping to protect organisations but in shaping how AI adoption can become an advantage.
“I think HR, in some cases, need to embrace there’s a change happening and determine how can they be active parts of designing this, and how we will work with AI in the future,” says Richter, professor at the Wellington School of Business and Government.
As part of his role, Richter teaches a course in digital transformation to executive MBA students. When discussing generative AI, he says, a key question is how AI can fit with the values of Aotearoa.
“We shouldn’t just be trying to simulate what another country is doing, though,” says Richter, adding that this tends to happen. “We should be confident that we have really good basics in terms of values and community, and then come up with our own approach based on how we want society to look in the future. Only if this is clear can we determine how AI can help with that.”
As a microcosm of society, an organisation with strong values and strategies around how to achieve goals is well placed to help lead the way in terms of the footprint this technology creates, he says.
“As an organisation, as a strategic HR leader, we need to think, ‘What roles do our employees play in all this and how do we want to work in the future?’ The people, whose feedback has created the values, are arguably the most important resource in the whole company, so how can we support their wellbeing and health and satisfaction and also productivity? HR is absolutely key to this.”
In addition, HR has a valuable role to play in developing training in the technology, which is essential and time-critical, says Richter.
“With the use of generative AI, privacy for a start is a very significant issue, as is security. Especially for companies that have intellectual property and data that they wouldn't want to be shared publicly, it is risky.
“There are very inviting platforms out there but, ideally, organisations shouldn't use them to analyse data that’s confidential as, most likely, the server would be in another country. So there's always the risk that if somebody uploads a confidential document, the information could end up in the public domain.”
Just as email accounts can be hacked and information accessed by others, organisations should apply the same considerations to the use of ChatGPT, says Richter.
A number of larger organisations overseas have incorporated dedicated generative AI tools into their core infrastructure rather than using public software like ChatGPT, he says.
“OpenAI has also recently started offering safer versions, but again it comes back to individual employees, who might not be aware of this and then use ChatGPT instead.”
One of the biggest challenges with this technology, says Richter, is the speed at which its capabilities are developing, which makes addressing it all the more urgent.
“It’s most likely already happening in your company,” he says. “You need to be aware of the risks and how you can address those.”
As a result, it’s advisable to upskill as soon as possible, at both a company and an individual level, says Richter.
“That means, first of all, making employees aware of what’s acceptable, specifically the absolute don'ts. Make clear it is a risk for the company.
“People need to comprehend the full picture. If you’re uploading to ChatGPT, you’ve got to think about what would happen if that information were available to others, and what the ramifications of that would be. In some cases, it could be critical information or personal details that should never get out there.”