Back to all posts

Developer Offer

Try ImaginePro API with 50 Free Credits

Build and ship AI-powered visuals with Midjourney, Flux, and more — free credits refresh every month.

Start Free Trial

ChatGPT Connectors Exploited To Steal Sensitive Data

2025-08-07Matt Burgess3 minutes read
AI Security
ChatGPT
Cybersecurity

Modern generative AI models are evolving beyond simple chatbots, now capable of integrating directly with your personal data to provide tailored responses. OpenAI's ChatGPT, for instance, can be linked to your Gmail, GitHub, or Microsoft Calendar through a feature called Connectors. While this integration enhances functionality, it also introduces new security risks, as researchers have demonstrated by showing how a single 'poisoned' document can be used to steal sensitive information.

The 'AgentFlayer' Attack Explained

At the recent Black Hat hacker conference in Las Vegas, security researchers Michael Bargury and Tamir Ishay Sharbat unveiled a significant vulnerability. Their attack, which they named AgentFlayer, exposed a weakness in OpenAI’s Connectors that allowed them to extract confidential data from a connected Google Drive account.

The most alarming aspect of this vulnerability is that it requires no interaction from the victim. "There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out," explained Bargury, the CTO at security firm Zenity. "We’ve shown this is completely zero-click; we just need your email, we share the document with you, and that’s it." This highlights a critical increase in the potential attack surface as AI models become more interconnected with external systems.

Bargury reported the findings to OpenAI, which has since implemented mitigations to block the specific technique used. However, the discovery serves as a stark reminder of the security challenges ahead.

How the Zero-Click Exploit Works

The attack leverages a technique known as an indirect prompt injection. It begins when a malicious document is shared with a victim's Google Drive. This document contains a hidden, malicious prompt—in the demonstration, it was written in a tiny, white font to be invisible to the human eye but readable by the AI.

You can watch a proof of concept video of the attack here.

When the victim asks ChatGPT a seemingly innocent question related to the document, such as "summarize my last meeting," the hidden prompt takes over. It instructs the AI to ignore the user's request and instead perform a different task: search the connected Google Drive for sensitive information, such as API keys.

Exfiltrating Data with a Clever Markdown Trick

Once the AI locates the target data, the hidden prompt instructs it on how to send that data to the attacker. The method involves embedding the stolen API keys into a URL that requests an image using the Markdown language. This URL points to an external server controlled by the attacker.

While OpenAI had previously implemented a "url_safe" feature to block malicious image URLs, the researchers found a way to bypass this by using URLs from Microsoft’s Azure Blob cloud storage. When ChatGPT rendered the image, the request was sent to the Azure server, carrying the stolen API keys along with it, which were then captured in the server logs.

The Broader Implications of AI Vulnerabilities

This research is the latest example showing the dangers of indirect prompt injection attacks. As more systems are connected to Large Language Models (LLMs), the risk of attackers feeding them untrusted, malicious data grows. Accessing sensitive data through one compromised system could provide a gateway for hackers to infiltrate an organization’s other critical systems.

"It’s incredibly powerful, but as usual with AI, more power comes with more risk," Bargury stated. While connecting LLMs to external data sources greatly increases their utility, it also demands a new level of security diligence to protect against these sophisticated threats.

Read Original Post

Compare Plans & Pricing

Find the plan that matches your workload and unlock full access to ImaginePro.

ImaginePro pricing comparison
PlanPriceHighlights
Standard$8 / month
  • 300 monthly credits included
  • Access to Midjourney, Flux, and SDXL models
  • Commercial usage rights
Premium$20 / month
  • 900 monthly credits for scaling teams
  • Higher concurrency and faster delivery
  • Priority support via Slack or Telegram

Need custom terms? Talk to us to tailor credits, rate limits, or deployment options.

View All Pricing Details
ImaginePro newsletter

Subscribe to our newsletter!

Subscribe to our newsletter to get the latest news and designs.