
ChatGPT in the Courtroom: A Risky Legal Aid

2025-06-01 · Gaby Del Valle · 8 minute read
Legal Tech
Artificial Intelligence
Legal Ethics

It seems every few weeks a new headline emerges about a lawyer encountering legal trouble for submitting court documents filled with what one judge termed “bogus AI-generated research.” While the specific circumstances differ, the core issue remains consistent: an attorney employs a large language model (LLM) like ChatGPT for legal research or even drafting, the LLM invents non-existent cases (a phenomenon known as “hallucination”), and the lawyer remains unaware until a judge or opposing counsel highlights the error. In some instances, such as an aviation lawsuit in 2023, attorneys faced fines for these AI-induced fabrications. So, why does this practice continue?

Why Lawyers Persist with AI Despite Risks

The primary reasons appear to be tight deadlines and the pervasive integration of AI across professions. Legal research platforms like LexisNexis and Westlaw now incorporate AI functionalities, and for lawyers handling extensive caseloads, AI can look like a remarkably efficient assistant. While most lawyers may not use ChatGPT to draft entire filings, it and other LLMs are increasingly used for research. The trouble is that many of these legal professionals, like much of the general public, do not fully grasp what LLMs are or how they work. An attorney sanctioned in 2023 confessed he had perceived ChatGPT as a “super search engine.” It took submitting a filing full of fabricated citations for him to realize it works more like a sophisticated random-phrase generator, one that can serve up either accurate information or convincingly phrased falsehoods.

Are AI Blunders the Exception, Not the Rule?

Andrew Perlman, Dean of Suffolk University Law School, posits that many lawyers utilize AI tools without issues, and those caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman stated. He also noted that legal databases and research systems like Westlaw are integrating AI services.

Indeed, a 2024 survey by Thomson Reuters found that 63 percent of lawyers said they had used AI before, and 12 percent said they use it regularly. Respondents reported using AI to summarize case law and to research “case law, statutes, forms or sample language for orders.” The surveyed attorneys view AI as a time-saving tool, with half saying that “exploring the potential for implementing AI” at work is their top priority. One respondent commented, “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents.”

However, as numerous recent examples demonstrate, documents produced by AI are not always accurate, and sometimes, not real at all.

When AI Goes Wrong: High-Profile Hallucinations

In a notable recent case, attorneys for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case on First Amendment grounds. Judge Kathryn Kimball Mizelle of the Middle District of Florida ordered the motion stricken after finding “significant misrepresentations and misquotations of supposedly pertinent case law and history.” The judge identified nine hallucinations in the document, according to the Tampa Bay Times.

Judge Mizelle allowed Burke’s lawyers, Mark Rasch and Michael Maddux, to submit a revised motion. Rasch, in a separate filing, assumed “sole and exclusive responsibility for these errors,” stating he used ChatGPT Pro's “deep research” feature, which The Verge has previously tested with varied outcomes, alongside Westlaw’s AI feature.

Rasch is not unique. Lawyers representing Anthropic recently admitted to using the company’s Claude AI for an expert witness declaration in a copyright lawsuit by music publishers. This filing contained a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock conceded he used ChatGPT to organize citations for a declaration supporting a Minnesota deepfake law, which led to “two citation errors, popularly referred to as ‘hallucinations,’” and incorrect author listings for another citation.

The View from the Bench: Judges Confront AI Errors

These documents significantly matter, especially to judges. In a recent California case against State Farm, Judge Michael Wilner was initially swayed by a brief's arguments, only to discover the cited case law was entirely fabricated. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Wilner wrote.

Perlman suggests several lower-risk applications for generative AI in legal work. These include finding information within large volumes of discovery documents, reviewing briefs or filings, and brainstorming potential arguments or counterarguments. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman explained.

The Human Factor: Overcoming Time Crunches and Overtrust

Nevertheless, lawyers relying on AI for legal research and writing must diligently check the output, Perlman cautioned. A contributing factor is that attorneys are often short on time, an issue he notes predates LLMs. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.” (Though, he added, the cases usually did exist.)

A more subtle issue is the over-reliance attorneys—and other AI users—place on AI-generated content. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman observed.

AI as a Junior Colleague: A Practical Approach

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, treats ChatGPT as a junior-level associate. He has also used ChatGPT to help draft legislation. In 2024, he incorporated AI text into a deepfakes bill, having the LLM provide the “baseline definition” of deepfakes, after which “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian. Kolodin mentioned he “may have” discussed his ChatGPT use with the bill’s main Democratic cosponsor but otherwise intended it as an “Easter egg.” The bill became law.

Kolodin, who was sanctioned by the Arizona State Bar in 2023 for his involvement in lawsuits challenging the 2020 election results, also uses ChatGPT for initial drafts of amendments and for legal research. To mitigate hallucinations, he simply verifies the citations.

“You don’t just typically send out a junior associate’s work product without checking the citations,” Kolodin remarked. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.”

Kolodin uses both ChatGPT Pro’s “deep research” tool and the LexisNexis AI tool. In his experience, LexisNexis has a higher hallucination rate than ChatGPT, whose rate he believes has “gone down substantially over the past year.”

Formal Guidance: The ABA Weighs In on AI Ethics

AI use among lawyers has become so common that in 2024, the American Bar Association (ABA) issued its first guidance on attorneys’ use of LLMs and other AI tools.

The ABA opinion states that lawyers using AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI. It advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use—essentially, not to assume an LLM is a “super search engine.” Attorneys should also assess confidentiality risks when inputting case-related information into LLMs and consider informing clients about their use of such tools.

The Future of AI in Law: Revolution or Risk?

Perlman is optimistic about lawyers’ AI use. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he predicted. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”

Others, including judges who have sanctioned lawyers for AI-generated errors, remain more skeptical. “Even with recent advances,” Judge Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
