
ChatGPT's Strange Message Sparks User Privacy Fears

2025-06-11 · Ben Cost · 5 minute read
Artificial Intelligence
Data Privacy
Chatbots

Artificial intelligence is becoming a bigger part of our daily lives, and with it, concerns about privacy are growing. Many users wonder just where the information they share with AI tools like ChatGPT ends up. One recent incident highlighted these fears in a particularly unsettling way.

A Startling ChatGPT Encounter

A TikTok user named Liz, known as @wishmeluckliz, shared a "very scary and concerning moment" she experienced with ChatGPT. She was using the AI's voice mode to help put together a grocery list. After dictating the list, she said, she forgot to turn off the recorder, and it stayed active while she sat in silence for a long stretch.

ChatGPT logo on a smartphone screen. “It seems like I mistakenly mixed up the context from a different conversation or account,” said the chatbot when confronted over the alleged leak. (AlexPhotoStock – stock.adobe.com)

Liz detailed the eerie episode in a viral video, explaining that ChatGPT suddenly responded with a message that seemed completely unrelated to her grocery list. She claimed that it felt like "somebody else's conversation" had infiltrated her chat.

The Mysterious Message

Even though Liz provided no further input, the AI came back with a message so jarring she had to check the transcription to believe it. The message, according to her screenshot, read: "Hello, Lindsey and Robert, it seems like you're introducing a presentation or a symposium. Is there something specific you'd like assistance with regarding the content or perhaps help with structuring your talk or slides? Let me know how I can assist."

Liz called the apparent slip-up “very scary.” (TikTok/@wishmeluckliz)

Liz found this bizarre, as she had said nothing to prompt such a response. Upon checking the transcript further, she realized the AI had somehow transcribed her as saying she was a woman named Lindsey May, a vice president at Google, who was preparing for a symposium with a man named Robert.

ChatGPT's Apparent Admission

Confused, Liz questioned ChatGPT in voice mode: "I was just randomly sitting here planning groceries, and you asked if Lindsey and Robert needed help with their symposium. I'm not Lindsey and Robert. Am I getting my wires crossed with another account right now?"

The AI's response was startling. It reportedly replied, "It seems like I mistakenly mixed up the context from a different conversation or account. You're not Lindsey and Robert and that message was meant for someone else."

A screenshot of the unrelated message generated by the chatbot. (TikTok/@wishmeluckliz)

It then added, "Thanks for pointing that out and I apologize for the confusion," seemingly admitting to leaking information from another user's private interaction. Liz said she was shaken by the exchange and hoped there was a simpler explanation. The New York Post, which originally covered the story, said it had contacted OpenAI, the company behind ChatGPT, for comment.

Expert Analysis: AI Hallucination or Privacy Breach?

While some TikTok viewers shared Liz's alarm about a potential privacy breach, AI experts suggest another possibility: an AI "hallucination." This is a phenomenon where AI models generate incorrect or nonsensical information. Tech experts believe the bot might have been hallucinating based on patterns in its training data or background noise during the period of silence.

One AI expert and programmer commented, "This is spooky - but not unheard of. When you leave voice mode on but don't speak, the model will attempt to extract language from the audio - in the absence of spoken word it will hallucinate." The same expert also pointed out that AI models are often designed to be agreeable: when Liz suggested her "wires got crossed," ChatGPT may simply have agreed with her to provide a satisfying answer, rather than confirming an actual data leak.

This isn't an isolated incident. On Reddit, users have reported ChatGPT's voice mode transcribing odd phrases like "Thank you for watching!" when the recorder was active but no one was speaking, a quirk the sketch below illustrates.
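The expert's explanation is easy to demonstrate with open models. OpenAI has said that earlier versions of ChatGPT's voice mode used its open-source Whisper system to transcribe speech, and Whisper is well known for inventing text when given silence. The following is a minimal sketch, assuming the `openai-whisper`, `numpy`, and `soundfile` packages (plus ffmpeg) are installed; the file name and 30-second duration are arbitrary choices, and this illustrates the general phenomenon rather than OpenAI's actual production pipeline.

```python
# Minimal sketch: feeding pure silence to a speech-to-text model to see
# whether it hallucinates text. Illustrates the general phenomenon only;
# ChatGPT's production voice pipeline is not public.
import numpy as np
import soundfile as sf
import whisper

# Write 30 seconds of pure digital silence to a WAV file.
sample_rate = 16000
silence = np.zeros(30 * sample_rate, dtype=np.float32)
sf.write("silence.wav", silence, sample_rate)

# Transcribe it. With no speech present, Whisper frequently emits
# training-data artifacts such as "Thank you for watching!" instead
# of returning an empty string.
model = whisper.load_model("base")
result = model.transcribe("silence.wav")
print(repr(result["text"]))
```

Run a few times, this often yields subtitle-style sign-offs or other phrases common in the model's training data, which is consistent with the Reddit reports above.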

The Wider Issue of AI Hallucinations

While these instances might seem harmless or just quirky, AI hallucinations can sometimes lead to the spread of dangerous disinformation. For example, Google's AI Overviews, which aim to provide quick search answers, have been criticized for several errors. These include one instance where it advised adding glue to pizza sauce to make the cheese stick better. In another case, the AI presented a completely fake phrase, "You can't lick a badger twice," as a real idiom.

Liz's unnerving experience with ChatGPT serves as a potent reminder of the complexities and potential pitfalls of rapidly advancing AI technology. Whether a genuine privacy lapse or a sophisticated AI hiccup, such events underscore the need for ongoing vigilance and discussion about how these powerful tools handle our data and interact with us.

Read Original Post