OpenAI Cleared In ChatGPT Defamation Lawsuit

2025-05-29 · Debra Cassens Weiss · 3 minutes read
AI Law
Defamation
ChatGPT

OpenAI Defamation Lawsuit Dismissed

A lawsuit filed by a radio host who alleged that a ChatGPT hallucination defamed him has been tossed by a judge who found no negligence or actual malice by OpenAI, the creator of the artificial intelligence platform. (Illustration by Sara Wadford/ABA Journal/Shutterstock)

A lawsuit brought by a radio host who claimed he was defamed by a ChatGPT-generated falsehood has been dismissed. The judge found that OpenAI, the developer of the AI platform, had shown neither negligence nor actual malice.

The Allegations: ChatGPT's False Embezzlement Claims

Judge Tracie H. Cason of Gwinnett County, Georgia, delivered the ruling on May 19 against Mark Walters, a nationally syndicated radio host known as “the loudest voice in America fighting for gun rights.”

The lawsuit centered on an incident, previously detailed in legal news, in which ChatGPT incorrectly stated that Walters was “defrauding and embezzling funds” from the Second Amendment Foundation. The false statement was generated in 2023 after a journalist repeatedly asked the chatbot to summarize a suit filed by the gun rights nonprofit.

Journalist's Role and Non-Publication

The journalist involved had encountered disclaimers from ChatGPT, warning that some information it provides might be incorrect. Initially, when asked to open a link to read and describe the suit, ChatGPT responded that it could not access the link. The erroneous information regarding Walters was generated after subsequent queries.

Importantly, the journalist verified that the claims were false within approximately an hour and a half and, as a result, never published the defamatory information.

Judge Cason's Ruling: No Negligence or Actual Malice

An expert witness for OpenAI testified that the output from ChatGPT “contained clear warnings, contradictions and other red flags that it was not factual.”

Judge Cason stated that no reasonable reader would have interpreted the ChatGPT output as conveying actual facts, meaning it was not defamatory as a matter of law. She further concluded that Walters failed to establish either negligence or actual malice on OpenAI's part.

Key Defense Points: Warnings and User Responsibility

During oral arguments, Walters’ lawyer contended that “a prudent man would take care not to unleash a system on the public that makes up random false statements about others.” However, Judge Cason noted that a publisher is not considered negligent merely because it acknowledges the potential for making mistakes.

OpenAI's Stance on AI Accuracy

Cason referenced an opinion from OpenAI’s expert, highlighting that the company is an industry leader in its efforts to reduce and prevent such errors. She also acknowledged that OpenAI has “taken extensive steps” to alert users to potential inaccuracies in the information provided by its AI.

Public Figure Status and Lack of Damages

The judge also pointed out that even if OpenAI had acted negligently, it would likely be protected because Walters is a public figure. In such cases, proving actual malice – that OpenAI knew the statement was false or acted with reckless disregard for the truth – is required, which Walters did not do.

Finally, Cason concluded that Walters could not recover damages because he did not incur any actual harm from the unpublished statements. Furthermore, he was not eligible for punitive damages because he had not requested a correction or retraction from OpenAI.

Several legal publications, including Law.com and Reuters, have covered the decision.

John Monroe, Walters’ attorney, informed Law.com via email that he and his client “are reviewing the order and considering our options,” suggesting the legal discourse on this matter may continue.
