OpenAI’s latest “Disrupting Malicious Uses of AI” report shows that hackers and influence operators are moving toward a more organised use of artificial intelligence (AI). The findings reveal that adversaries are spreading their operations across multiple AI systems, for instance, using ChatGPT for reconnaissance and planning, while relying on other models for execution and automation.
The company noted that attackers haven’t changed their methods but have simply added AI to make their existing tactics faster and more efficient, whether that’s writing malware, refining phishing lures, or managing online scams.
While malicious AI tools such as WormGPT and FraudGPT are already known, new ones are now surfacing. SpamGPT helps cybercriminals bypass email security filters with targeted spam campaigns, while MatrixPDF weaponises ordinary PDF files to deliver malware.
It is worth noting that this latest report comes four months after OpenAI’s earlier publication, which revealed that the company had shut down ten malicious operations linked to China, Russia, Iran, and North Korea, all of which heavily misused ChatGPT.
Russian, Korean and Chinese Operators Using AI for Targeted Attacks
In one instance, Russian-speaking actors used ChatGPT to write and refine code for remote-access tools and credential stealers. The model refused direct malicious prompts, but the users extracted functional snippets that they later assembled into working tools elsewhere. OpenAI found no indication that these interactions gave the hackers capabilities they couldn’t already find through open-source code.
Korean-language operators used ChatGPT to assist with code debugging, credential theft routines, and phishing messages related to cryptocurrency. Each account handled specific technical tasks, such as browser extension conversion or VPN configuration, showing structured workflows similar to corporate development teams.
Chinese-language operators went further, asking the model to generate phishing content in multiple languages and help with malware debugging. Their activity coincided with campaigns reported by Volexity and Proofpoint that targeted academia, think tanks, and the semiconductor sector. According to OpenAI, these users aimed to increase efficiency rather than develop new attack methods.
Organised Crime and Scam Operations
The report also shows how AI is being exploited in established scam networks. Operations traced to Cambodia, Myanmar, and Nigeria used ChatGPT to translate messages, write fake investment pitches, and manage day-to-day logistics within “large-scale scam centers.”
In one example, scammers posed as financial advisors running fake trading groups on WhatsApp. They generated all the chat content with AI to make conversations seem authentic and convincing. Another network used ChatGPT to design fake online investment firm profiles, complete with fabricated employee biographies.
Interestingly, OpenAI found that ChatGPT is being used to detect scams about three times more often than it is used to create them, as people turn to the model to verify suspicious messages.
State-Linked Abuses of AI
OpenAI also reported accounts linked to Chinese government entities using ChatGPT to draft proposals for large-scale social media monitoring systems. One user requested help outlining a “High-Risk Uyghur-Related Inflow Warning Model,” which aimed to track individuals through travel and police data.
Other users focused on profiling and information gathering, asking ChatGPT to summarise posts by activists or identify petition organisers. The company said its models returned only public data, but the intent behind these requests raised concerns about surveillance-related use.
Influence Operations in Russia and China
The Russia-origin “Stop News” operation, previously disrupted by OpenAI and other tech firms, resurfaced using AI to write video scripts and promotional text. These were translated and turned into short videos shared on YouTube and TikTok, praising Russia and criticising Western nations. Although the campaign produced a steady stream of content, engagement remained low.
A separate operation, “Nine—Line,” appeared linked to China and focused on regional disputes in the South China Sea. The group generated English and Cantonese social media posts criticising the Philippines, Vietnam, and Hong Kong democracy activists. They also sought advice on boosting engagement through TikTok challenges and hashtags. Most of their posts gained little attention before the accounts were suspended.
Expert Perspective
“For the average reader, this might sound like a not particularly surprising trend. AI is being used in everything from generating cool chilli recipes to college essays, so why wouldn’t it be writing the code and other exploits needed for cyberattacks?” said Evan Powell, CEO of DeepTempo.
“What most may not realise is that cybersecurity defences are uniquely vulnerable to AI-powered attacks. Today’s defences are almost entirely based on static rules: if you see A and B while C, then that’s an attack and take action. Today’s AI attackers train their systems to avoid these fixed pattern detections, which allows them to slip into enterprises and government systems at an increasing rate,” he explained.
He added that attackers are now using AI not only to build tools but also to plan campaigns. “These types of campaigns, which combine detailed research, customised attacks, and repeated attempts to gain access, used to require expertise, patience, and a large team. Today, AI boosts the productivity of attackers, enabling even one person to carry out operations that once required a well-funded organisation or nation-state. The implications are terrifying.”
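To make Powell’s point about fixed-pattern defences concrete, the sketch below shows what a static, rule-based detection looks like in practice. It is a minimal, hypothetical Python example (the indicators, field names, and thresholds are invented for illustration, not taken from any real product or from the OpenAI report): the rule only fires when two fixed indicators appear together under a fixed condition, which is precisely the kind of rigid logic an AI-assisted attacker can learn to sidestep by varying process names, ports, or transfer volumes.

```python
# Illustrative sketch of a static "if A and B while C" detection rule.
# All indicators and thresholds here are hypothetical examples.

from dataclasses import dataclass


@dataclass
class Event:
    process: str      # e.g. "powershell.exe"
    dest_port: int    # network destination port
    bytes_out: int    # bytes sent in the connection


def is_attack(event: Event) -> bool:
    saw_a = event.process == "powershell.exe"   # indicator A: scripted shell
    saw_b = event.dest_port == 4444             # indicator B: "known bad" port
    while_c = event.bytes_out > 1_000_000       # condition C: large outbound transfer
    return saw_a and saw_b and while_c


# An attacker who renames the process, rotates ports, or throttles the
# transfer slips past this rule without changing the underlying behaviour.
events = [
    Event("powershell.exe", 4444, 5_000_000),   # matches the fixed pattern -> alert
    Event("python.exe", 8443, 200_000),         # same intent, no alert
]
for e in events:
    print(e.process, "->", "ALERT" if is_attack(e) else "no alert")
```

The second event in the example illustrates the gap Powell describes: the malicious behaviour is unchanged, but because none of the hard-coded indicators match, the rule stays silent.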