Maryland Lawyer Sanctioned For Botched ChatGPT Legal Brief

Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images
The integration of artificial intelligence into professional fields is creating new and complex challenges, particularly within the American legal system. A recent case from a Maryland appellate court highlights the significant risks involved when AI tools like ChatGPT are used without proper oversight.
AI Hallucinations Enter the Courtroom
A family lawyer, representing a mother in a contentious custody battle, found himself in hot water after filing court briefs that were generated using ChatGPT. The issue came to light when it was discovered that the legal filings were filled with "AI hallucinations." As reported by The Daily Record, the document submitted by the lawyer contained numerous citations to case law that simply did not exist.
This incident is not an isolated one, echoing other high-profile legal blunders involving AI. Beyond the fabricated cases, the brief also included citations to real legal precedents that actively contradicted the arguments the lawyer was trying to make, further undermining his client's position.
The Lawyer's Defense and a Judge's Rebuke
In his defense, the attorney, Adam Hyman, attempted to deflect responsibility. He claimed he was not directly involved in the research and instead blamed a law clerk. According to Hyman, the clerk used ChatGPT to find legal citations and edit the brief. Hyman stated that the clerk was unaware of the risks of AI hallucinations—the tendency for AI models to confidently invent false information.
He further admitted that he did not personally vet the cases cited in the document and mentioned that he handles very little appellate work. This excuse did not sit well with the court. In a sharply worded official opinion, Maryland appellate Judge Kathryn Grill Graeff condemned the attorney's actions.
"It is unquestionably improper for an attorney to submit a brief with fake cases generated by AI," Judge Grill Graeff wrote. "A competent attorney reads the legal authority cited in court pleadings to make sure that they stand for the proposition for which they are cited."
Setting a Precedent for AI in Law
Judge Grill Graeff explained that while such a mistake might not typically warrant a formal opinion, she felt it was necessary to "address a problem that is recurring in courts around the country." The incident served as a crucial opportunity to set a precedent in Maryland regarding the use of AI in legal practice.
As a result, Hyman was required to take full responsibility for the flawed submission. Both the lawyer and the clerk have been ordered to complete legal education courses focused on the ethical use of AI. Furthermore, their office must now implement strict protocols for verifying all legal citations. Hyman was also referred to the Attorney Grievance Commission for potential further disciplinary action.
This case marks the first time a Maryland appellate court has formally addressed the issue, but given the increasing trend of AI in legal work, it is unlikely to be the last.

