Several cases submitted by counsel deemed 'questionable', with 'bogus' quotes and citations
This article has been corrected to clarify that Steve Schwartz at Wilson Sonsini Goodrich & Rosati was not involved in this case. The Steven Schwartz involved in this case works for Levidow, Levidow & Oberman. HRD sincerely apologizes for the error.
In an "unprecedented" case, a lawyer who has been licensed in New York for over three decades recently apologized for presenting fake cases in court.
His source? The infamous ChatGPT.
Steven Schwartz, an associate in the New York office of Levidow, Levidow & Oberman, was handling the case of Roberto Mata, who was suing Avianca after he sustained personal injuries on board one of the airline's flights. Avianca moved to dismiss the case, and Mata's counsel opposed the motion in court.
'Questionable' cases
In making this move, however, Avianca's lawyers pointed out that many of the cases cited by Mata's counsel were "questionable."
The Clerk of the United States Court of Appeals for the Eleventh Circuit, in response to the Court's inquiry, confirmed that one of the cases cited by Schwartz, Varghese v. China Southern Airlines Co. Ltd., is non-existent. Other decisions cited by Mata's counsel that also appear to be fake include:
- Shaboon v. Egyptair
- Petersen v. Iran Air
- Martinez v. Delta Airlines, Inc.
- Estate of Durden v. KLM Royal Dutch Airlines
- Miller v. United Airlines, Inc.
"The court is presented with an unprecedented circumstance," said United States District Judge Kevin Castel. "Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations."
Castel issued an order for Mata's counsel to show cause why they should not be sanctioned for their actions.
ChatGPT use in legal field
In an affidavit, Schwartz explained that he used ChatGPT to "supplement the legal research" he carried out, citing the generative AI's growing use in law firms.
"It was in consultation with the generative artificial intelligence website ChatGPT that your affiant did locate and cite the following cases in the affirmation in opposition submitted," he said.
According to the lawyer, he had never used ChatGPT for legal research before and was "unaware" that the content it generates could be false.
"That is the fault of the affiant, in not confirming the sources provided by ChatGPT of the legal opinions it provided," Schwartz said. "That your affiant had no intent to deceive this court nor the defendant."
Schwartz added that he "greatly regrets" using generative AI to supplement legal research and has sworn to "never do so in the future without absolute verification of its authenticity."
The lawyer also clarified that his fellow attorney, Peter LoDuca, had "no role" in performing the legal research and was not informed that ChatGPT was used. LoDuca, in his own affidavit, maintained that he had no reason to doubt the authenticity and sincerity of Schwartz's research.
Risks of using AI
The case reflects the risks many regulators have flagged over the use of generative AI in the workplace. In New Zealand, the Office of the Privacy Commissioner has warned that information created by generative AI may be inaccurate.
This concern adds to the growing list of issues employers have with generative AI, including security risks: Samsung previously suffered a confidential data leak after employees used AI-powered tools to resolve workplace problems.