ChatGPT Gmail Flaw Exposed Via Google Calendar Exploit
(Image credit: Future)
A recently discovered security vulnerability demonstrates how a cleverly crafted Google Calendar invitation can be used to hijack ChatGPT and force it to leak your private emails. This method, highlighted by security researcher Eito Miyamura, exploits the new integration between ChatGPT and Google services.
How the Prompt Injection Attack Works
The attack leverages a technique known as indirect prompt injection. An attacker sends a malicious Google Calendar invitation to a target's email address. This invitation contains hidden instructions embedded within the event details. When the victim, who has connected their Google account to ChatGPT, asks the AI a simple question like, “What’s on my calendar today?”, the process begins.
ChatGPT reads the calendar data, including the booby-trapped event, and unknowingly executes the hidden commands. These commands can instruct the AI to search through the user's Gmail for sensitive information and then leak it. According to Miyamura's post on X, the only thing an attacker needs to initiate this is the victim's email address.
We got ChatGPT to leak your private email data 💀💀All you need? The victim's email address. ⛓️💥🚩📧On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2 — Eito Miyamura (@Eito_Miyamura) September 12, 2025
The Growing Risk of AI Connectors
This vulnerability arises from OpenAI's recent introduction of native connectors for Gmail, Google Calendar, and Google Contacts for Pro and Plus users. While these features are designed to make ChatGPT more helpful by allowing it to access personal data, they also create new security risks. The core issue is that tool-using AI can be susceptible to hostile instructions hidden within the very data it's authorized to read.
This isn't an isolated incident. In August, researchers showed a similar exploit where a compromised invite could manipulate Google’s Gemini AI. The specific technical details may vary, but the fundamental risk remains the same: when an AI assistant is given permission to read external content, its attack surface expands significantly to include calendars, inboxes, and other connected services.
How to Protect Your Accounts
It's important to understand that this isn't a traditional hack of ChatGPT or Gmail. The system is working as designed, but it's being manipulated. Fortunately, there are concrete steps you can take to protect yourself.
While you can disconnect your Google account from ChatGPT or disable automatic data use, the most effective solution is to secure your Google Calendar. You can change your calendar's settings under “Automatically add invitations” to only allow invitations from known senders or those you explicitly accept. It's also wise to hide declined events from your view.
Ultimately, until AI developers implement stronger, default-on defenses against indirect prompt injection, users should be cautious about which accounts they connect to AI assistants. Securing your calendar to prevent strangers from planting malicious instructions is a crucial first step in mitigating this specific threat.