A US lawyer’s decision to use ChatGPT, an advanced language model, to prepare a crucial court filing has backfired spectacularly.
The artificial intelligence programme fabricated cases and rulings, causing immense embarrassment to the attorney involved.
Steven Schwartz, a lawyer based in New York, issued an apology to a judge this week after realising that the brief he had submitted contained false information generated by the OpenAI chatbot.
“I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic,” Schwartz wrote in a court filing.
The incident took place during the proceedings of a civil case at the Manhattan federal court, where Roberto Mata is suing the Colombian airline Avianca.
Mata alleges that he sustained an injury when a metal serving plate struck his leg during a flight from El Salvador to New York in August 2019.
After the airline’s lawyers asked the court to dismiss the case, Schwartz filed a response citing more than half a dozen purported decisions to argue that the litigation should proceed.
They included Petersen v. Iran Air, Varghese v. China Southern Airlines and Shaboon v. Egyptair.
The Varghese case even included dated internal citations and quotes.
However, there was one major problem: neither Avianca’s attorneys nor the presiding judge, P. Kevin Castel, could find the cases.
Schwartz was compelled to acknowledge that ChatGPT had fabricated all the information provided in the response.
“The court is presented with an unprecedented circumstance,” Judge Castel wrote last month.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” he added.
In response to the situation, the judge issued an order summoning Schwartz and his law partner to appear before the court, to face possible sanctions.
In a filing submitted on Tuesday (06) ahead of the hearing, Schwartz apologised to the court and expressed deep regret for the significant error he had made.
He explained that his introduction to ChatGPT came from his college-educated children, and this incident marked the first occasion he had utilised the tool in his professional capacity.
“At the time that I performed the legal research in this case, I believed that ChatGPT was a reliable search engine. I now know that was incorrect,” he wrote.
Schwartz added that it “was never my intention to mislead the court.”
Since its launch late last year, ChatGPT has gained worldwide attention for its remarkable capability to generate human-like content, ranging from essays and poems to engaging conversations based on minimal prompts.
OpenAI, the organisation behind ChatGPT, has not yet responded to requests for comment on Schwartz’s unfortunate incident.
The story initially surfaced in The New York Times.
Schwartz, along with his firm, Levidow, Levidow & Oberman, expressed their distress over the media coverage, highlighting the sense of being subjected to public ridicule.
In a statement, Schwartz acknowledged the profound embarrassment caused on both personal and professional fronts, recognising that the articles regarding the incident would remain accessible for years to come.
He further assured the court that the situation had served as a valuable lesson, vowing never to repeat such an error in the future.