ChatGPT-generated report leads to GenAI ban at DFFH

Order comes after employee entered sensitive information on ChatGPT

The Department of Families, Fairness and Housing (DFFH) in Victoria has been ordered to ban its employees from using generative AI after an employee used ChatGPT to draft a Protection Application (PA) report.

The Office of the Victorian Information Commissioner (OVIC) ordered the department to block access to several generative AI platforms, including ChatGPT, Meta AI, Gemini, and Copilot, for two years.

The issue stems from an investigation carried out by OVIC after a child protection worker used ChatGPT in December 2023 to draft a court report, entering sensitive personal information, including the name of an at-risk child.

Use of ChatGPT

In a report released on Tuesday, OVIC found that the employee had indeed used ChatGPT in drafting the PA report and input personal information in doing so.

According to the report, entering the personal and sensitive information about the mother, father, carer, and child into ChatGPT amounted to disclosing that information to OpenAI.

"This unauthorised disclosure released the information from the control of DFFH with OpenAI being able to determine any further uses or disclosures of it," it stated.

There were also a "wide range of indicators of ChatGPT usage throughout the report."

"These included the use of language not commensurate with employee training and Child Protection guidelines, as well as inappropriate sentence structure," the OVIC said.

One inaccuracy in the document concerned the report's reference to a doll, which had been flagged in child protection reports as potentially being used by the child's father for sexual purposes.

However, the worker's ChatGPT-generated report described the doll as an "age-appropriate toy" and part of the parents' efforts to support the child's development.

"The use of ChatGPT therefore had the effect of downplaying the severity of the actual or potential harm to the child, with the potential to impact decisions about the child’s care," the OVIC said.

"Fortunately, the deficiencies in the report did not ultimately change the decision making of either Child Protection or the Court in relation to the child."

Broader ChatGPT usage

The OVIC further found indicators that ChatGPT may have been used to draft reports in 100 cases handled by the employee over a one-year period.

Nearly 900 employees of DFFH also accessed the ChatGPT website between July and December 2023.

According to OVIC, the department breached the Information Privacy Principles for not having policies in place on when and how generative AI tools should be used.

"There was no evidence that, by the time of the PA Report incident, DFFH had made any other attempts to educate or train staff about how GenAI tools work, and the privacy risks associated with them," the OVIC's findings read.

"Additionally, there were no departmental rules in place about when and how these tools should or should not be used. Nor were there any technical controls to restrict access to tools like ChatGPT."

The GenAI ban for DFFH's workforce begins on November 5. The department said it "accepts the finding" that it breached IPPs and vowed to address the orders from the OVIC.
