AI Misuse In Law Sparks Judicial Alarm
Turns out Dickmas v Worzel (1876) wasn't the clincher ChatGPT promised.
Two lawyers who cited entirely fictional legal cases generated by Artificial Intelligence (AI) have narrowly avoided contempt of court proceedings. The episode has prompted senior judges to issue stark warnings about the responsible use of AI in legal practice.
AI Generated Falsehoods Rattle Legal System
Mr Justice Johnson and Dame Victoria Sharp, President of the King’s Bench Division, cautioned that the legal profession must maintain rigorous oversight over its use of tools like ChatGPT. Their concern is to prevent court proceedings from devolving into scenarios where legal professionals, or "bewigged human mouthpieces," inadvertently present AI-generated fabrications as fact.
The alarm was first sounded when it was discovered that the Haringey Law Centre and barrister Sarah Forey, of 3 Bolt Court Chambers, had used five invented cases to support their arguments in a judicial review.
The Haringey Case AI Misuse Unveiled
When the opposing solicitors pointed out that they could not find the cited cases, Sunnelah Hussain, a lawyer at Haringey Law Centre, dismissed the missing citations as "cosmetic errors." She went on to question whether the defendants' lawyers had raised the matter "to avoid undertaking really serious legal research." The judge in that case described her letter as "remarkable" and the episode as a whole as "appalling professional misbehaviour."
Ms Forey initially denied deliberately using AI when the matter was escalated to the King’s Bench Division. She later conceded that she "may also have carried out searches on Google or Safari" and used AI-generated summaries of results "without realising what they were." However, she could not provide any evidence to support this claim, and it emerged that this was not the first time she had been caught citing non-existent cases. In April 2025, a barrister covering for Forey found that her documents contained references to several cases that did not exist. The judge in that earlier instance had written to Forey’s Head of Chambers, Farah Ramzan, suggesting a referral to the Bar Standards Board (BSB), but Forey and Ramzan persuaded him it was unnecessary.
Supervisory Failures and Judicial Discretion
In considering whether Forey was in contempt of court for the Haringey case, the King’s Bench Division stated in its judgment that she had either "deliberately included fake citations in her written work" or used AI. In either scenario, "the threshold for initiating contempt proceedings is met."
Despite this, the court decided to spare her, noting she was an "extremely junior lawyer who was apparently operating outside her level of competence." The judgment also pointed to potential failings by her supervisors, stating, "There are questions raised as to potential failings on the part of those who had responsibility for training Ms Forey, for supervising her, for ‘signing off’ her pupillage, for allocating work to her, and for marketing her services." The court also acknowledged she had already faced public criticism and was referred to the BSB. However, it warned that this decision was not a precedent and confirmed her referral to the BSB.
A Second Lawyer Misled by AI Research
Mercy was also shown to another lawyer, Abid Hussain of Primus Solicitors. He had relied in court on numerous cases that were either "completely fictitious" or did not contain the passages he and his client claimed. Hussain told the court that he had allowed his client to handle the initial drafting of documents; unbeknownst to him, the client had used AI for the task. An extremely apologetic Hussain said he was "horrified" and had since withdrawn from all litigated matters after being misled by AI.
The court determined that despite his and Primus Solicitors’ "lamentable failure to comply with the basic requirement to check the accuracy of material that is put before the court," his ignorance of AI's involvement meant the threshold for contempt was not met.
Court Delivers Stark Warning on AI in Law
The judges used these proceedings to deliver a pointed warning: lawyers must understand the limitations of ChatGPT and similarly fallible AI tools. Their judgment stated, "Artificial intelligence is a tool that carries with it risks as well as opportunities. Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research."
They further elaborated, "Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source."
The Imperative of Human Oversight for AI Tools
The judgment advised treating AI as one would a wayward junior lawyer. The responsibility of a lawyer using AI is "no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister." Consequently, "Its use must take place therefore with an appropriate degree of oversight."