The High Cost of AI Mistakes in Law
A California attorney has been fined $10,000 for a significant professional misstep: filing a state court appeal riddled with fake quotations produced by the AI tool ChatGPT. The penalty is believed to be the largest of its kind issued by a California court for AI-related fabrications and serves as a stark warning to the legal community.
A Stern Warning From the Court
The court's scathing opinion revealed that an astonishing 21 out of 23 case quotes in the attorney's opening brief were entirely fabricated. The judges noted that this is not an isolated incident, with courts across the country confronting similar issues of attorneys citing fake legal authority.
“We therefore publish this opinion as a warning,” the court stated. “Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified.”
This case has amplified the urgency for legal authorities to regulate the use of AI. California’s Judicial Council recently issued guidelines requiring courts to establish an AI use policy. Concurrently, the California Bar Association is reviewing its code of conduct at the request of the state Supreme Court to address the challenges posed by artificial intelligence.
The Lawyer's Perspective on AI Use
The Los Angeles-based attorney, Amir Mostafavi, admitted to the court that he did not review the text generated by the AI model before submitting the appeal. This occurred months after ChatGPT was marketed as being capable of passing the bar exam. A three-judge panel fined him for multiple violations, including filing a frivolous appeal, citing nonexistent cases, and wasting judicial resources.
Mostafavi explained that he wrote the appeal himself and then used ChatGPT to try to improve it, unaware that the tool would invent case citations. While he acknowledges the danger, he believes it is unrealistic to expect lawyers to abandon AI. He suggests that until AI systems stop “hallucinating” fake information, legal professionals must proceed with extreme caution.
“In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”
A Troubling and Growing Trend
Experts confirm that this is a rapidly escalating problem. Damien Charlotin, who teaches about AI and law and tracks AI fabrication cases, notes that Mostafavi's fine is one of the highest ever issued for this offense. He has seen the number of such cases jump from a few per month to several per day, warning that large language models are more likely to invent information when a legal argument is difficult to support.
“The harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you,” Charlotin explained.
A May 2024 analysis from Stanford University's RegLab found that while most lawyers plan to use generative AI, some models can produce hallucinations in one out of every three queries. Another tracker project has identified over 600 cases nationwide where lawyers cited nonexistent legal authority due to AI use.
Jenny Wondracek, who leads that tracking project, says the problem is worsened by a basic lack of understanding among many lawyers that these tools can simply make things up. Nor is the issue limited to attorneys; she has also documented instances of judges inadvertently citing fake legal authority in their decisions.
Scrambling for Solutions and Regulations
As California grapples with this new challenge, experts suggest looking to approaches adopted in other states. These include temporary suspensions, mandatory ethics courses on AI, and even requiring sanctioned attorneys to teach law students how to avoid repeating their mistakes.
Mark McKenna of the UCLA Institute of Technology, Law & Policy, supports fines like the one against Mostafavi, calling the blind use of AI “an abdication of your responsibility.” He predicts the problem “will get worse before it gets better,” as law schools and firms rush to adopt AI without fully understanding its pitfalls.
UCLA School of Law professor Andrew Selbst echoed this sentiment, noting that recent graduates and students are being pressured to use AI to stay competitive. This pressure is felt across professions, with educators reporting similar demands.
“This is getting shoved down all our throats,” Selbst said. “It’s being pushed in firms and schools and a lot of places and we have not yet grappled with the consequences of that.”