Key risks for businesses include the disclosure of confidential information and reliance on false information
The Privacy Commissioner has released guidance on his office's expectations regarding the use of generative artificial intelligence (AI) by businesses that are subject to the Privacy Act 2020 (Privacy Act). While the Commissioner acknowledges that the pace at which generative AI is changing and developing means the guidance will need ongoing review and amendment, it provides helpful practical advice to New Zealand businesses on the key data protection issues to consider when using generative AI.
What is generative AI?
Generative AI is a type of AI that can be used to create content such as code, images and text. While generative AI has been around for almost a decade, it has become increasingly prevalent and powerful in the last 12 months due to the release of various generative AI products directly to consumers, such as OpenAI's ChatGPT, Google's Bard and Microsoft's Bing.
Key takeaways
The guidance highlights a number of key risks for businesses in relation to the use of generative AI:
- Generative AI typically requires the user to input data for the AI tool to operate. The inputted data may be used for a number of purposes, including to provide the user's requested response and to train the generative AI tool for future requests (including requests from other users). This presents a risk for businesses to the extent that they share confidential information (including personal information) with an AI tool that may not be secure and/or may not have appropriate data and privacy protections in place.
- In addition to potentially perpetuating bias and discrimination, generative AI tools can produce "confident errors of fact" and may not be accurate or capable of being relied upon. This risk was recently emphasised by the New Zealand Law Society, which cautioned against the use of ChatGPT due to its fabrication of case citations, and by a recent case in the US that highlights the risks for practitioners relying on ChatGPT.
- Generative AI tools may not enable businesses to satisfy their obligations under the Privacy Act in relation to access and correction of personal information.
In light of the risks above, the Privacy Commissioner's guidance sets out practical steps for New Zealand businesses to take when using generative AI tools for their business operations, including:
- Reviewing whether the use of a generative AI tool is necessary and proportionate, or whether an alternative approach could be taken (for example, an approach that may provide greater protections in relation to the use of personal information).
- Conducting a privacy impact assessment (or an Algorithmic Impact Assessment) to help identify and mitigate privacy risks. This should involve undertaking due diligence in relation to the relevant AI tool's use of data and seeking feedback from impacted communities and groups, including Māori.
- Being transparent and ensuring that customers and clients are informed about how their personal information will be used and how potential privacy risks are being addressed. Importantly, this needs to be explained in a way that is easy to understand (particularly when collecting information from children).
- Having a human review the outputs of any generative AI tool before those outputs are actually used (for example before decisions are made that will impact individuals) to help mitigate the risks associated with inaccurate or biased information.
- Being careful about what data is shared with generative AI tools. Businesses should only share confidential or personal information with an AI tool where there is an express commitment from the relevant AI provider that this information will not be retained or disclosed by that provider. Alternatively, businesses should ensure that any confidential or personal information is removed from data before that data is uploaded to the relevant AI tool.
Allan Yeoman is a partner in Buddle Findlay’s Technology, Media and Telecommunications team in Auckland. Amy Ryburn is a partner specialising in commercial law at Buddle Findlay in Wellington. Alex Chapman is a senior associate specialising in commercial contracting at Buddle Findlay in Wellington.