Pratfalls With ChatGPT

Aug 4, 2023 | Business Litigation Update

August 2023 Business Litigation Update

Artificial Intelligence (AI) – particularly ChatGPT – continues to be in the news as it assumes an ever larger role in all types of commercial and personal activities.  Many businesses and individuals have been captivated by the allure of AI, with its promise of unparalleled efficiency and accuracy, plus the ability to manage vast amounts of data.  ChatGPT is also being regularly used to generate ideas, create content, respond to emails, produce computer code, write resumes and cover letters, and design presentations, according to a recent article on zdnet.com.

But danger lurks.

Two recent events involving lawyers – one in New York and one in Texas – serve as good reminders that the use of AI involves risks that, if not mitigated, can be costly.

In June, New York federal district court judge Hon. Kevin Castel sanctioned two lawyers who had submitted a legal brief written by ChatGPT.  The brief included citations to non-existent court opinions and fake quotes from the made-up opinions!  Most surprisingly, according to Judge Castel’s ruling, the lawyers continued to stand behind the fake cases even after they were pointed out by both opposing counsel and the judge.  This was a bad strategy.

Judge Castel stated that he might not have sanctioned the attorneys if they had come “clean” about using ChatGPT, noting that “[t]echnological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.”  But the judge determined the lawyers had exhibited “bad faith” because they made false and misleading statements about their legal brief and its contents after questions were raised about the fake cases.  “[E]xisting rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings,” the judge ruled.

Judge Castel ordered each lawyer to pay $5,000 in sanctions.  He also ordered them to notify the judges “whose names were wrongfully invoked” of the sanctions imposed.

Meanwhile, in July, a Texas appellate court in Waco affirmed the denial of a writ of habeas corpus because, in part, the appellate lawyer seeking the writ cited to three non-existent cases in his brief.  As noted by the appellate court:

“None of the three published cases cited actually exist ….  Each citation provides the reader a jump-cite into the body of a different case that has nothing to do with the propositions cited by [the lawyer].  Two of the citations take the reader to cases from Missouri….  [E]ven Texas cases with the same names as those cited by [the lawyer] do not correspond with the propositions relied upon….  These deficiencies, although brought to the Court’s and to [the lawyer’s] attention … were neither contested nor corrected ….”

Somewhat surprisingly, the Texas appellate court refrained from imposing sanctions against the lawyer, noting that “[b]ecause we have no information regarding why the briefing is illogical, and because we have addressed the issue raised on appeal, we resist the temptation to issue a show cause order as a New York federal district judge … or report the attorney to the State Bar of Texas for a potential investigation for a violation of the State Bar rules.”  Plainly, subsequent lawyers caught using ChatGPT without review may not be given the same latitude.

AI has transformed the way we conduct business, write computer code, create art, and, for some, practice law.  But its adoption requires careful consideration of its limitations and risks.  While this seems obvious, the two examples above show that not everyone is getting the message.  As AI continues to evolve, lawyers – and all professionals, businesses, and individuals using it – must strike a balance between embracing the new technology and preserving necessary oversight, analysis, and review.