
The Legal Risks of Sharing Secrets with AI

2025-09-01 · Natalie Musumeci · 3 minute read
AI
Privacy
Law

[Image: A close-up of someone's fingers typing on a laptop.]

Artificial intelligence chatbots like OpenAI's ChatGPT are increasingly being used as digital confidants and even stand-in therapists. While these tools are powerful, sharing your deepest secrets with them could expose you to serious legal risks.

According to legal experts, these conversations are not protected in the same way as discussions with a doctor, lawyer, or therapist. This means sensitive chat records could potentially be subpoenaed in a lawsuit or government investigation.

Your AI Is Not Your Lawyer or Therapist

Two lawyers specializing in AI-related legal issues emphasized that users should exercise caution. Conversations with AI lack the legal privilege that protects confidentiality in professional relationships.

"People are just pouring their hearts out in these chats, and I think they need to be cautious," said Juan Perla, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP. "Right now, there really isn't anything that would protect them if a court really wanted to get to the chat for some reason related to a litigation."

Even OpenAI CEO Sam Altman acknowledged this problem, noting that while people use ChatGPT like a therapist, there is no established legal privilege for it. "So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up," Altman stated.

When Your Chat History Becomes Evidence

The lack of legal protection means your chat logs could become discoverable evidence in various legal situations. Perla noted that messages related to a workplace dispute, divorce, custody case, or potential criminal activity could be subject to discovery.

"If you're putting something into ChatGPT because it's something that you would normally only share with your medical doctor, with your therapist, or with a lawyer, that should already tell you, 'I should not be putting this information in here,'" Perla advised. He added that the safest approach is to avoid having these sensitive conversations with AI chatbots altogether.

How to Protect Your Private Information

James Gatto, a partner at Sheppard Mullin who co-leads the firm's AI team, stressed the importance of understanding how different AI platforms handle your data. He advised users to carefully review the terms of service and data retention policies before sharing personal information.

Some key takeaways for users include:

  • Check the Policies: Understand how an AI tool stores, uses, and deletes your data.
  • Consider Paid Versions: Some paid platforms may offer more robust privacy features, like the automatic deletion of user inputs, which free versions typically do not.
  • Weigh the Risks: Before using an AI for sensitive topics, assess the potential consequences if that conversation became public. "The important takeaway is you need to understand the pros and cons of using these tools, you need to understand the legal and personal risk," Gatto said.

Perla concluded with a critical question every user should ask themselves before typing a sensitive prompt: "Am I comfortable with this information ever landing in the hands of somebody else?"
