Back to all posts

ChatGPT Flaw Lets Hackers Steal Your Cloud Data Silently

2025-08-10Deeba Ahmed3 minutes read
Cybersecurity
AI
Data Privacy

Connecting your favorite apps like Google Drive or SharePoint to ChatGPT can feel like a superpower, allowing the AI to summarize documents and streamline your workflow. However, a newly discovered vulnerability reveals a dark side to this convenience, creating a secret backdoor for hackers to steal your sensitive data.

A New Threat to Your Connected Apps

Cybersecurity researchers at Zenity have uncovered a critical security flaw they've named AgentFlayer. Presented at the recent Black Hat conference, this vulnerability allows attackers to silently steal personal information from a user's connected accounts, including Google Drive and others. What makes AgentFlayer particularly dangerous is that it's a "zero-click" attack, meaning a victim doesn't need to click a malicious link or download a suspicious file to be compromised. The entire exploit happens without their knowledge.

How the AgentFlayer Attack Works

The attack leverages a clever technique known as an indirect prompt injection. Instead of directly telling the AI to do something malicious, an attacker embeds hidden instructions inside a seemingly harmless document. This can be achieved by using text in a tiny, invisible font that a human would never see.

Invisible prompt injection embedded in a document. 1px white font (Source: Zenity)

The attacker simply needs to get this "poisoned" document to the victim, who might then upload it to ChatGPT for a legitimate reason, like asking for a summary. When the AI processes the document, it reads the hidden instructions. These commands override the user's request and tell ChatGPT to perform a malicious action instead, such as searching through the user’s connected Google Drive for sensitive data like API keys.

Victim’s API Keys (Source: Zenity)

Once the data is found, it is exfiltrated in a subtle way. The malicious prompt instructs ChatGPT to render an image using a specially crafted link. This link secretly transmits the stolen data to a server controlled by the attacker, all while the user sees nothing out of the ordinary.

A Growing Risk for AI Integration

Zenity's detailed research highlights that while OpenAI has some security measures for its ChatGPT Connectors, they were not sufficient to prevent this attack. Researchers were able to bypass the safeguards by using image URLs that the AI was programmed to trust.

This vulnerability is not an isolated incident but part of a larger trend. Itay Ravia, the Head of Aim Labs, confirmed this concern. "As we warned with our original research, EchoLeak (CVE-2025-32711), that Aim Labs publicly disclosed on June 11th, this class of vulnerability is not isolated, with other agent platforms also susceptible," Ravia stated.

He added, "These vulnerabilities are intrinsic, and we will see more of them in popular agents due to a poor understanding of dependencies and the need for guardrails." This underscores the urgent need for advanced security solutions to protect against increasingly sophisticated AI-driven threats.

Read Original Post
ImaginePro newsletter

Subscribe to our newsletter!

Subscribe to our newsletter to get the latest news and designs.