Privacy concerns showcased in Australian ChatGPT investigation similar for NZ organisations
A recent investigation by the Victorian privacy regulator has resulted in a ban on the use of ChatGPT in a government department. The findings highlight that while generative artificial intelligence (GenAI) tools offer significant benefits in the workplace, they also carry risks without robust policies, training and education on their use.
The investigation by the Office of the Victorian Information Commissioner (OVIC) centred on a Protection Application Report (PA Report) prepared by a Child Protection worker, employed by the Department of Families, Fairness and Housing (DFFH). The report was submitted to the Victorian Children’s Court in proceedings against parents charged with sexual offending.
After the report was submitted, a DFFH legal representative reviewed it and noted that its language was overly sophisticated, complex and descriptive. More concerningly, the report included inaccurate information, including inconsistent references to a child's doll.
As a result, the report inaccurately presented what should be an indicator of risk to the child as an indication of positive caregiving capacity of the parents. This downplayed the severity of the actual or potential harm to the child.
The DFFH representative reported their concerns to DFFH's Child Protection division, which investigated and concluded that the report's author had used the free version of ChatGPT to draft the report. DFFH withdrew the PA Report from the Court and notified OVIC of a potential privacy breach.
The concerns were twofold.
The first related to the release of sensitive personal information to ChatGPT. When information is entered into the free version of ChatGPT, it is disclosed to the tool's owner, OpenAI (an offshore company). OpenAI can then decide how to use the information, for example by training ChatGPT or sharing the information with third parties. OpenAI also offers paid versions of ChatGPT, which provide greater privacy protections for data entered by users.
In this case, a significant amount of personal and sensitive information was entered into ChatGPT without permission. As a result, it was disclosed to OpenAI and passed outside the Department's control.
The second concern was over the use in the report of content generated by ChatGPT. That content contained inaccurate personal information which, in this case, downplayed the risks to the child.
OVIC launched an investigation under the Privacy and Data Protection Act 2014 (Vic), which includes provisions similar to New Zealand's Privacy Act in relation to the collection, use and handling of personal information. OVIC concluded that, in using ChatGPT to draft the PA Report, DFFH had breached the following Information Privacy Principles (IPPs):
OVIC was critical of the policies and protections that DFFH had in place to manage the risks of GenAI tools, finding they were not sufficient to ensure compliance with the IPPs. It issued Compliance Notices, requiring, amongst other things, DFFH to block Child Protection staff from accessing various GenAI tools including ChatGPT. It also required DFFH to regularly scan for similar GenAI tools and block access to them by Child Protection staff.
Under New Zealand's Privacy Act 2020, the same conduct would likely have resulted in similar privacy breaches. The IPPs breached by using ChatGPT have the following equivalents under New Zealand's Privacy Act:
Like OVIC, the New Zealand Office of the Privacy Commissioner has the power to issue Compliance Notices.
The key takeaway for New Zealand organisations is that robust, well-thought-out policies, education and training on GenAI use are essential. Lip service is not enough, as OVIC's criticisms of the measures in place at the time the PA Report was drafted make clear:
OVIC was similarly underwhelmed by the additional measures adopted by DFFH after it reported the privacy breach:
New Zealand organisations can derive high-level guidance about how to use GenAI in accordance with the Privacy Act from the OPC’s publication Artificial Intelligence and the IPPs and the government’s Generative AI Advice for the Public Sector.
Anita Birkinshaw is a Special Counsel at Simpson Grierson in Auckland, specialising in complex commercial disputes including intellectual property, privacy, and data protection and cyber-security. Jania Baigent is a partner at Simpson Grierson in Auckland, specialising in insurance, product liability, privacy, and media law. Karen Ngan is a partner at Simpson Grierson in Auckland, specialising in technology, telecommunications, and data protection. Michelle Dunlop is a Senior Associate at Simpson Grierson in Auckland, specialising in commercial technology, data protection, and cyber-security.