ChatGPT Legal Blunder Costs Attorney Her Job and More
The High Cost of Unchecked AI
A cautionary tale is unfolding in the Chicago legal community, highlighting the significant risks of relying on artificial intelligence without proper verification. An attorney, hired to defend the Chicago Housing Authority (CHA), admitted to a critical error: citing a completely fictitious court case in a lawsuit concerning the alleged lead poisoning of two children. The source of this phantom case was ChatGPT, a popular AI tool, and the mistake stemmed from a failure to check its output.
A Pattern of AI-Generated Errors
Unfortunately, this wasn't a one-time lapse in judgment. Court records reveal that the attorney, Danielle Malaty, had a history of improper AI use. In a separate case, Calderon v. Dynamic Manufacturing, Inc., Malaty submitted filings riddled with AI "hallucinations": a motion to dismiss and a subsequent reply together contained a staggering 12 fake case citations. That case involved a woman's claim of a hostile work environment under the Illinois Human Rights Act.
In a court filing, Malaty's counsel expressed her deep remorse. "Ms. Malaty apologizes to the Court, the Court’s staff and Ms. Calderon’s counsel," the filing stated, adding that she is "immensely remorseful (to say nothing of embarrassed) for the burden that she has imposed." Malaty argued that she did not act in "bad faith" and that the errors were not intentional.
Sanctions and Professional Fallout
Despite her apologies, Cook County Circuit Judge William Sullivan sanctioned Malaty on July 16, fining her $10 for her improper use of AI. She was also ordered to pay the plaintiff's counsel $1,000 to compensate for the time spent addressing the fabricated citations.
The professional consequences were even more severe: Malaty was terminated from her position as a partner at the law firm Goldberg Segalla, which had an existing policy banning the use of AI that she had violated. Since the incident, Malaty has started her own practice and has reportedly completed about seven hours of training on AI and ethical issues.
Her former firm, Goldberg Segalla, has since taken firm-wide measures to re-educate its attorneys on AI policies and has established new preventative protocols.
An Ongoing Legal Battle and A Touch of Irony
The story is not over. Attorneys for the plaintiffs in the CHA lawsuit are now citing the Calderon chancery case as further evidence in their own request for sanctions against Malaty. In a separate but related development in the CHA case, a jury decided in January that the agency must pay over $24 million in damages, a ruling the CHA continues to contest.
In a twist of irony, a now-deleted LinkedIn post shows that about ten months earlier, Malaty had written a blog post for her former firm titled "Artificial Intelligence in the Legal Profession: Ethical Considerations." Meanwhile, the CHA, which has been billed nearly $390,000 by Goldberg Segalla, has been assured by the firm that it will not be charged for any time or expenses related to the AI issue.