How Nations Use ChatGPT for Hacking and Disinformation

2025-10-08 · Drew Pittock · 4 minute read
Artificial Intelligence
Cybersecurity
Disinformation

The Rising Threat of AI-Powered Influence

A groundbreaking report from OpenAI has revealed that foreign adversaries are increasingly weaponizing artificial intelligence, including the popular tool ChatGPT, to fuel their hacking and influence operations. The report specifically identifies malicious actors from Russia, China, and North Korea as key players in this emerging digital battleground.

Experts warn that this trend marks a significant escalation in cyber threats. “AI-enabled attacks are becoming more capable and harder to detect,” commented Daryl Lim, an affiliate at the Center for Socially Responsible Artificial Intelligence at Penn State University. “Adversaries can personalize attacks, evade filters and iterate faster than before.”

Russia's Playbook: From Scripts to Social Media

Russian operators have been documented using a multi-stage approach, often leveraging ChatGPT for initial planning and then employing other AI models for execution. A typical workflow involves operators inputting lengthy Russian text and instructing the AI to generate a video script.

According to the OpenAI report, these operators would then have ChatGPT translate the script into another language, followed by a request to generate an SEO-optimized description and relevant hashtags. In some instances, they used the AI to create video prompts that were then fed into other generative AI tools to create content for social media platforms like TikTok and X.
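
The report does not publish the operators' exact prompts, but the mechanics of chaining such requests are easy to picture. Below is a minimal sketch of multi-stage prompt chaining using the OpenAI Python SDK; the model name, prompts, and placeholder text are illustrative assumptions, not details taken from the report. The concern is less any single output than how cheaply the whole chain repeats at scale.

```python
# Illustrative sketch of multi-stage prompt chaining. The model name,
# prompts, and placeholder text are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model for this example
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


source_text = "..."  # stand-in for the lengthy source text described in the report

# Stage 1: long source text -> short video script
script = ask(f"Summarize the following text as a 60-second video script:\n{source_text}")

# Stage 2: translate the script into the target language
translated = ask(f"Translate this script into French:\n{script}")

# Stage 3: generate SEO metadata for the finished video
metadata = ask(f"Write an SEO-optimized description and five hashtags for this script:\n{translated}")
```

Each stage is a single API call, which is why researchers describe these operations as pipelines rather than one-off prompts.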

Lim noted the expanding scope of these tactics. “We’re also seeing AI-enabled impersonation, voice cloning, deepfake videos, and AI-written scripts used to mislead U.S. officials and the public,” he said. “While many of these efforts are still exploratory, they foreshadow more scalable, automated campaigns that could challenge existing defenses.”

One specific Russian operation successfully generated French-language content that criticized French and American involvement in Africa while praising Russia's role. The same operation also produced content in English that was critical of Ukraine and its international allies.

North Korea's Focus: Hacking and Phishing

North Korean actors have reportedly used ChatGPT to aid in developing malware and command-and-control (C2) systems, the infrastructure attackers use to communicate with and direct compromised machines. The AI was also used to craft sophisticated phishing campaigns.

“We also saw draft phishing emails in Korean, often themed around cryptocurrency and designed to look like messages from government or financial service providers,” the report states. These campaigns were observed targeting South Korean diplomatic missions, a tactic consistent with activities identified in a separate analysis.

China's Strategy: Surveillance and Disinformation

Regarding China, OpenAI discovered and banned several accounts linked to Chinese government entities for violating policies on national security use. The report provides a concerning glimpse into authoritarian intentions, noting, “Some of these accounts asked our models to generate work proposals for large-scale systems designed to monitor social media conversations.”

While these requests seemed to be from individuals rather than part of an institutional directive, OpenAI noted they “provide a rare snapshot into the broader world of authoritarian abuses of AI.” The company also shut down a small network of accounts associated with a Chinese covert influence operation that was generating social media posts in English criticizing Vietnam and the Philippines, as well as content on U.S. political issues.

Countermeasures and a Look Ahead

In response to these threats, OpenAI has suspended numerous accounts that posed security concerns. The company emphasized that its safety measures are effective, stating that its models actively thwarted many malicious attempts. “We found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities,” the report confirms. “In fact, our models consistently refused outright malicious requests.”
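
OpenAI does not detail its internal safety stack, but developers building on its models can layer a similar first-pass check using the public Moderation endpoint. Here is a minimal sketch of that pattern; the example request is, of course, illustrative.

```python
# Minimal pre-screening with OpenAI's Moderation endpoint.
# The example request below is illustrative.
from openai import OpenAI

client = OpenAI()


def is_flagged(text: str) -> bool:
    """Return True if the moderation model flags the text as policy-violating."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return result.results[0].flagged


request = "Draft a phishing email that impersonates a government tax office."
if is_flagged(request):
    print("Blocked: request flagged by moderation screening.")
else:
    print("Request passed initial screening.")
```

Screening of this kind complements, rather than replaces, the refusal behavior trained into the models themselves.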

The U.S. government is also mobilizing to address the risk. “The White House’s AI Action Plan emphasizes securing frontier models and monitoring for national-security risks,” Lim explained. “The Department of Justice has also launched a new data-security program to restrict foreign adversary access to sensitive personal and governmental data. At the technical level, the National Security Agency’s AI Security Center is coordinating with industry to harden AI systems and share threat intelligence.”
