ChatGPT Security Flaw Allows Harmful Image Injection
A critical security vulnerability has been uncovered in ChatGPT, enabling attackers to embed malicious Scalable Vector Graphics (SVG) and image files directly within shared conversations. This flaw could expose users to sophisticated phishing attempts and harmful content. The issue, officially documented as CVE-2025-43714, is reported to affect ChatGPT versions active through March 30, 2025.
Understanding the Vulnerability
Security researchers discovered that ChatGPT was improperly executing SVG code when a chat was reopened or shared via a public link, instead of rendering it as plain text within a code block. This behavior leads to a stored cross-site scripting (XSS) vulnerability on the popular AI platform. A researcher known as zer0dac stated, "The ChatGPT system through 2025-03-30 performs inline rendering of SVG documents instead of, for example, rendering them as text inside a code block, which enables HTML injection within most modern graphical web browsers."
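The distinction the researcher draws comes down to how the client inserts untrusted SVG source into the page. A minimal TypeScript sketch, using standard DOM APIs and a hypothetical payload, contrasts the safe code-block path with the unsafe inline-rendering path:

```typescript
// Hypothetical attacker-supplied SVG, e.g. pasted into a chat message.
// Note: <script> tags inserted via innerHTML are not executed by browsers,
// but event-handler attributes such as onload are, which is why SVG
// payloads commonly rely on them.
const untrustedSvg = `<svg xmlns="http://www.w3.org/2000/svg"
     onload="alert('XSS in the viewer session')">
  <circle cx="20" cy="20" r="10"/>
</svg>`;

// Safe path: show the SVG source as plain text inside a code block.
// textContent never parses markup, so the payload stays inert.
const codeBlock = document.createElement("code");
codeBlock.textContent = untrustedSvg;
document.body.appendChild(codeBlock);

// Unsafe path (the reported behavior): inline rendering parses the SVG,
// and the onload handler fires in the viewer's browser -- a stored XSS
// when the chat is reopened or shared via a public link.
const container = document.createElement("div");
container.innerHTML = untrustedSvg;
document.body.appendChild(container);
```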
The Dangers of Malicious SVGs
The security implications of this flaw are substantial. Attackers can craft deceptive messages embedded within SVG code that appear entirely legitimate to unsuspecting users.
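To make the phishing risk concrete, here is a purely hypothetical payload (not one observed in the wild) showing how an SVG can draw text and a clickable link that imitate a legitimate system notice:

```typescript
// Hypothetical phishing SVG: rendered inline, it looks like an official
// notice with a working link to an attacker-controlled page.
const phishingSvg = `<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink" width="420" height="60">
  <text x="10" y="25" font-family="sans-serif" font-size="14">
    Your session has expired. Please sign in again:
  </text>
  <a xlink:href="https://example.com/fake-login">
    <text x="10" y="48" font-size="14" fill="#1a0dab"
          text-decoration="underline">Re-authenticate now</text>
  </a>
</svg>`;
```

Rendered as an image, nothing here visually distinguishes it from genuine interface text, which is why inline rendering of untrusted SVG is treated as a phishing vector.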
Even more alarmingly, malicious actors could design SVGs with epilepsy-inducing flashing effects, potentially causing harm to photosensitive individuals. The vulnerability stems from the nature of SVG files. Unlike standard image formats such as JPG or PNG, SVGs are XML-based vector images that can legitimately contain script tags and other active markup. When these SVGs are rendered inline instead of as inert code, any embedded markup executes within the user's browser. A report on a similar issue on a different platform explains, "SVG files can contain embedded JavaScript code that executes when the image is rendered in a browser. This creates an XSS vulnerability where malicious code can be executed in the context of other users' sessions."
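A platform that needs to render user-supplied SVG inline can neutralize these vectors by sanitizing the markup first. The sketch below uses only standard DOM APIs to strip script elements, event-handler attributes, and javascript: URLs; in practice, production systems typically rely on a vetted sanitizer such as DOMPurify rather than hand-rolled filtering:

```typescript
// Minimal SVG sanitizer sketch: parse, strip active content, re-serialize.
function sanitizeSvg(svgText: string): string {
  const doc = new DOMParser().parseFromString(svgText, "image/svg+xml");

  // Drop <script> elements and <foreignObject> (which can embed HTML).
  doc.querySelectorAll("script, foreignObject").forEach((el) => el.remove());

  // Strip event-handler attributes (onload, onclick, ...) and
  // javascript: URLs from every remaining element.
  doc.querySelectorAll("*").forEach((el) => {
    for (const attr of Array.from(el.attributes)) {
      const name = attr.name.toLowerCase();
      const value = attr.value.trim().toLowerCase();
      if (name.startsWith("on") || value.startsWith("javascript:")) {
        el.removeAttribute(attr.name);
      }
    }
  });

  return new XMLSerializer().serializeToString(doc);
}
```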
OpenAI's Response and User Guidance
OpenAI has reportedly taken an initial step to mitigate the issue by disabling the link-sharing feature after the vulnerability was reported. However, a comprehensive fix for the underlying rendering behavior is still awaited. Security experts advise users to exercise extreme caution when viewing shared ChatGPT conversations, especially those from unknown or untrusted sources. The vulnerability is particularly concerning because most users inherently trust content originating from ChatGPT and would not typically anticipate visual manipulation or phishing attempts through the platform. As security researcher zer0dac noted, "Even without JavaScript execution capabilities, visual and psychological manipulation still constitutes abuse, especially when it can impact someone's wellbeing or deceive non-technical users."
Securing AI Interfaces: A Growing Need
This discovery underscores the increasing importance of securing AI chat interfaces against traditional web vulnerabilities. As these AI tools become more deeply integrated into daily workflows and communication channels, their security becomes paramount.