AI Exploited for Malicious Cyber Activities Worldwide
OpenAI's latest threat report reveals that malicious actors, including those potentially linked to North Korea, Beijing-backed cyber operatives, and Russian malware distributors, are leveraging ChatGPT for nefarious purposes.
The AI research and deployment company announced it had disrupted ten distinct operations that used its chatbot for activities such as social engineering, cyber espionage, generating spammy social media content, and developing sophisticated malware. Four of these campaigns were attributed to China, and OpenAI has banned all associated ChatGPT accounts.
AI Fuels Fake IT Worker Scams
Several banned accounts were connected to campaigns creating fake IT worker profiles, using AI language models to generate application materials for software engineering and other remote positions. OpenAI's report (PDF) noted that while the exact actors couldn't be pinpointed, their methods aligned with publicly known IT worker schemes linked to North Korea (DPRK). Some of the individuals involved may have been contracted by DPRK-linked groups to handle job-application tasks and manage hardware, including within the United States.
These campaigns not only created fictitious US-based personas with fabricated job histories, a tactic previously documented by OpenAI and others, but also attempted to auto-generate resumes. Furthermore, OpenAI identified operators in Africa posing as job applicants and recruiting individuals in North America to manage laptop farms, similar to the case of an Arizona woman involved in a scheme benefiting North Korea.
Russian Actors Use AI for Disinformation and Malware
Other accounts shut down by OpenAI originated in Russia. These actors were caught engaging in familiar election-interference tactics, using ChatGPT to generate German-language content about Germany's 2025 election. The content was disseminated via a Telegram channel with 1,755 subscribers and an X (formerly Twitter) account with over 27,000 followers. One notable post referenced the Alternative für Deutschland (AfD) party, stating, "We urgently need a 'DOGE ministry' when the AfD finally takes office."
The Telegram channel frequently shared fake news and commentary sourced directly from a website identified by the French government as part of a Russian propaganda network known as "Portal Kombat."
One particularly noteworthy operation involved a Russian-speaking individual who used ChatGPT to develop Windows malware called ScopeCreep and to set up command-and-control infrastructure. The malware was distributed through a public code repository, disguised as Crosshair X, a legitimate crosshair overlay tool for video games.
The ScopeCreep malware, developed through iterative prompting of ChatGPT, was written in Go and incorporated several techniques to evade antivirus detection. Its capabilities included escalating privileges, harvesting browser-stored credentials, tokens, and cookies, and exfiltrating that data to attacker-controlled servers. Despite these advanced features, OpenAI said the info-stealing campaign did not achieve widespread distribution, though some samples were found on VirusTotal.
Chinese Cyber Operatives Misuse AI for Espionage and Influence
Four of the ten malicious operations identified by OpenAI likely originated in China. These groups primarily used AI models to generate large volumes of social media posts and profile images across platforms including TikTok, X, Bluesky, Reddit, and Facebook. The content, mainly in English and Chinese, focused on Taiwan, American tariffs and politics, and narratives favorable to the Chinese Communist Party.
In this recent activity, Chinese government-backed operators also employed ChatGPT to assist with open-source research, script modification, system troubleshooting, and software development. OpenAI noted that while the activity aligned with known Advanced Persistent Threat (APT) infrastructure, its models did not give the actors capabilities beyond what is already publicly available.
All the banned accounts in this category were associated with multiple unnamed PRC-backed hackers and utilized infrastructure operated by known APT groups Keyhole Panda (APT5) and Vixen Panda (APT15).
Technical queries made by these actors included mentions of reNgine, an automated reconnaissance framework for web applications, and of Selenium automation for bypassing login mechanisms and capturing authorization tokens. ChatGPT interactions related to software development covered web and Android app development, as well as software written in C and Go. Infrastructure setup included configuring VPNs, installing software, deploying Docker containers, and running local large language models such as DeepSeek.