
ChatGPT Security Flaw Allows Harmful Image Injection

2025-05-20 · Guru Baran · 3 min read
ChatGPT
Cybersecurity
Vulnerability

Image: ChatGPT vulnerability (malicious images)

A critical security vulnerability has been uncovered in ChatGPT, enabling attackers to embed malicious Scalable Vector Graphics (SVG) and image files directly within shared conversations. This flaw could expose users to sophisticated phishing attempts and harmful content. The issue, officially documented as CVE-2025-43714, is reported to affect ChatGPT versions active through March 30, 2025.

Understanding the Vulnerability

Security researchers discovered that ChatGPT was improperly executing SVG code elements when a chat was reopened or shared via a public link, instead of rendering them as plain text within code blocks. This behavior leads to a stored cross-site scripting (XSS) vulnerability on the popular AI platform. A researcher known as zer0dac stated, "The ChatGPT system through 2025-03-30 performs inline rendering of SVG documents instead of, for example, rendering them as text inside a code block, which enables HTML injection within most modern graphical web browsers."
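To make the researcher's point concrete, here is a minimal sketch in Python (not taken from OpenAI's code) of the difference between interpolating user-supplied SVG markup directly into a page and escaping it so that it renders as inert text inside a code block:

```python
# Minimal sketch, not OpenAI's implementation: contrasting inline rendering
# of user-supplied SVG with rendering it as escaped text in a code block.
import html

user_svg = '<svg xmlns="http://www.w3.org/2000/svg"><script>alert("phish")</script></svg>'

# Unsafe: interpolating the raw markup lets the browser parse the SVG and
# execute any embedded script or injected HTML.
unsafe_html = f"<div class='message'>{user_svg}</div>"

# Safer: escaping the markup displays it literally inside a code block,
# which is the behavior the researcher says should have been used.
safe_html = f"<pre><code>{html.escape(user_svg)}</code></pre>"

print(unsafe_html)  # would execute the script if served to a browser
print(safe_html)    # shows the markup as plain text; nothing executes
```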

The Dangers of Malicious SVGs

The security implications of this flaw are substantial. Attackers can craft deceptive messages embedded within SVG code that appear entirely legitimate to unsuspecting users.
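As a purely hypothetical illustration of such a payload (the domain and wording below are invented, not taken from the report), an SVG needs no JavaScript at all to be deceptive; plain text and a link, rendered inline, can pass for a notice from the chat interface itself:

```python
# Hypothetical deceptive SVG: no script, just text and a link styled to
# resemble a system notice. The phishing domain is an invented example.
deceptive_svg = """
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
     width="600" height="80">
  <rect width="600" height="80" fill="#f8f8f8" stroke="#cccccc"/>
  <text x="20" y="35" font-family="sans-serif" font-size="14">
    Your session has expired. Please re-verify your account to continue.
  </text>
  <a xlink:href="https://chatgpt-verify.example/login">
    <text x="20" y="60" font-family="sans-serif" font-size="14" fill="#1a73e8">Re-verify now</text>
  </a>
</svg>
"""
```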

Image: Example of payload execution

Even more alarmingly, malicious actors could design SVGs with epilepsy-inducing flashing effects, potentially causing harm to photosensitive individuals. The vulnerability stems from the nature of SVG files. Unlike standard image formats such as JPG or PNG, SVGs are XML-based vector images that can legitimately include HTML and script tags. When these SVGs are rendered inline instead of as inert code, any embedded markup executes within the user's browser. A report on a similar issue on a different platform explains, "SVG files can contain embedded JavaScript code that executes when the image is rendered in a browser. This creates an XSS vulnerability where malicious code can be executed in the context of other users' sessions."
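As a simplified sketch of why inline rendering is dangerous (and of the kind of stripping a renderer would need to perform), the snippet below parses an SVG that carries both an onload handler and an embedded script element and removes the active content before the markup is emitted. A real application should rely on a vetted sanitizer such as DOMPurify rather than a hand-rolled filter like this:

```python
# Simplified illustration: an SVG is ordinary XML, so a <script> element or
# an onload handler can ride along with the vector data. This strip-down is
# a sketch only; production code should use a vetted sanitizer.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

malicious_svg = f"""
<svg xmlns="{SVG_NS}" onload="alert('xss')">
  <rect width="100" height="100" fill="blue"/>
  <script>document.location = 'https://phishing.example/login';</script>
</svg>
"""

def strip_active_content(svg_text: str) -> str:
    """Remove <script> children and on* event attributes from an SVG document."""
    root = ET.fromstring(svg_text)
    for element in root.iter():
        # Drop event-handler attributes such as onload or onclick.
        for attr in [a for a in element.attrib if a.lower().startswith("on")]:
            del element.attrib[attr]
        # Drop embedded <script> elements entirely.
        for child in list(element):
            if child.tag.endswith("script"):
                element.remove(child)
    return ET.tostring(root, encoding="unicode")

print(strip_active_content(malicious_svg))  # the rect survives, the script does not
```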

Image: Browser crash demonstration

OpenAI's Response and User Guidance

OpenAI has reportedly taken initial steps to mitigate the issue by disabling the link-sharing feature after the vulnerability was reported. However, a comprehensive fix for the underlying problem is still awaited. Security experts advise users to exercise extreme caution when viewing shared ChatGPT conversations, especially those from unknown or untrusted sources. The vulnerability is particularly concerning because most users inherently trust content originating from ChatGPT and would not typically anticipate visual manipulation or phishing attempts through the platform. As security researcher zer0dac noted, "Even without JavaScript execution capabilities, visual and psychological manipulation still constitutes abuse, especially when it can impact someone's wellbeing or deceive non-technical users."

Securing AI Interfaces: A Growing Need

This discovery underscores the increasing importance of securing AI chat interfaces against traditional web vulnerabilities. As these AI tools become more deeply integrated into daily workflows and communication channels, their security becomes paramount.
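As a hypothetical hardening sketch (not a description of OpenAI's actual fix), a service that must hand back user-supplied SVG could serve it with a restrictive Content-Security-Policy and force a download instead of inline rendering. The load_user_svg helper below is an assumed placeholder for whatever storage backend holds the file:

```python
# Hypothetical hardening sketch using Flask; not OpenAI's implementation.
from flask import Flask, Response

app = Flask(__name__)

def load_user_svg(conversation_id: str) -> bytes:
    # Placeholder for the storage backend that holds the shared image.
    return b'<svg xmlns="http://www.w3.org/2000/svg"></svg>'

@app.route("/shared/<conversation_id>/image.svg")
def shared_image(conversation_id: str) -> Response:
    resp = Response(load_user_svg(conversation_id), mimetype="image/svg+xml")
    # Forbid script execution and external loads inside the document.
    resp.headers["Content-Security-Policy"] = "default-src 'none'; style-src 'unsafe-inline'"
    # Ask the browser to download the file rather than render it inline.
    resp.headers["Content-Disposition"] = "attachment; filename=image.svg"
    # Prevent MIME sniffing from reinterpreting the response.
    resp.headers["X-Content-Type-Options"] = "nosniff"
    return resp
```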

For those interested in learning more about how hackers probe websites, consider this Free Webinar on Vulnerability Attack Simulation.

Read the original article
