AI and Legal Holds: The New Compliance Nightmare
You are your company's in-house legal counsel. It's 3 PM on a Friday, and you've just received notice of impending litigation. Your first thought is to issue a legal hold. But as you see a colleague using an AI tool for work, a new concern arises: "Oh no... what about all the AI stuff?"
Welcome to the new reality where your legal hold obligations have been unexpectedly upgraded by artificial intelligence. This is no longer a theoretical issue. Companies are facing real consequences for failing to preserve AI-related data, leading to increased litigation exposure and a frantic rush to update compliance systems that were never designed for chatbots.
A New Era for Legal Preservation
Remember when a legal hold simply meant instructing employees not to delete emails? Those simpler times are gone. The fundamental duty to preserve electronically stored information (ESI) when litigation is reasonably anticipated still applies, but generative AI has added a significant layer of complexity. Courts are now clarifying that AI-generated content, including both the prompts entered by users and the outputs received, is considered ESI and is subject to traditional preservation rules.
Now, every prompt your team types into ChatGPT, every piece of AI-generated marketing copy, and even casual business-related queries to AI assistants are all potentially discoverable ESI.
The Courts Are Not Waiting
Several recent court decisions have set a clear precedent:
- In the In re OpenAI, Inc. Copyright Infringement Litigation MDL, a magistrate judge ordered OpenAI to preserve output log data that would otherwise be deleted. This ruling, later upheld by a district judge, signals that courts will prioritize litigation preservation over a company's default data deletion settings.
- Similarly, in Tremblay v. OpenAI, the court issued a sweeping order for OpenAI to preserve all output log data on a go-forward basis. This case established a crucial point: AI inputs, meaning the prompts themselves, are also discoverable.
- While not directly about AI, recent rulings on chat spoliation, such as the one concerning Google's chat auto-delete practices, show that judges expect auto-delete functions to be suspended once litigation is anticipated. These cases serve as a strong analogy for how courts will treat AI chat tools.
What Exactly Needs to Be Preserved Now
Your preservation checklist has officially expanded. Here's a breakdown of what's now on your radar:
The Obvious Stuff:
- Every prompt typed into AI tools.
- All AI-generated outputs used for business purposes.
- Metadata identifying who, what, when, and which AI model was used.
The Not-So-Obvious Stuff:
- Failed queries and abandoned outputs.
- Conversations within AI-powered bots in Slack and Teams.
- Quick, informal questions asked to AI about competitors or other business matters.
The "Are You Kidding Me?" Stuff:
- Deleted conversations, which are often recoverable.
- Personal AI accounts used for work-related tasks.
- AI-assisted research that didn't make it into final documents.
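To make the checklist concrete, here is a minimal sketch in Python of what a preservation record for a single AI interaction might capture: the prompt, the output, and the who/when/which-model metadata courts now expect. All field names, the append-only JSONL format, and the example values are hypothetical illustrations for discussion with your IT and e-discovery teams, not a prescribed standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIInteractionRecord:
    """One preserved AI interaction: who, what, when, and which model."""
    user_id: str
    model: str                     # AI model/tool name as reported by the vendor
    prompt: str                    # the input the user typed (inputs are discoverable)
    output: str                    # the response received, even if later abandoned
    timestamp: str                 # ISO 8601, UTC
    deleted_by_user: bool = False  # flag, don't purge: "deleted" chats are often recoverable

def preserve(record: AIInteractionRecord, log_path: str) -> None:
    """Append the record to an append-only JSONL preservation log."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: capturing a casual business-related query (hypothetical values).
record = AIInteractionRecord(
    user_id="jdoe",
    model="example-model-v1",
    prompt="Summarize our competitor's latest filing",
    output="(model response text)",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```

An append-only log with an explicit `deleted_by_user` flag reflects the lesson of the auto-delete rulings: once litigation is anticipated, a user's delete action should mark the record, not erase it.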
Knowing what to preserve is just the first step. The real challenge is implementing an AI-aware legal hold process when IT is still learning to monitor these tools and employees may be using unapproved personal accounts. We'll explore a practical playbook for AI preservation—from compliance frameworks to vendor questions—in a future post.
P.S. - Yes, this blog post was ideated, outlined, and brooded over with the assistance of AI. Yes, we preserved the prompts. Yes, we're practicing what we preach. No, we're not perfect at it yet either.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.