OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks

October 8, 2025

OpenAI has disrupted and reported more than 40 networks that violated its usage policies, with activity ranging from covert influence operations to scams and malicious cyber operations, and has published a set of case studies illustrating detection, enforcement, and partner collaboration. The report includes a small number of activity clusters tied to state-aligned or criminal operators in Russia, North Korea, and China who used ChatGPT and other AI tools to speed up traditional abuse playbooks. In many cases the models refused clearly malicious requests, and the attackers adapted by combining multiple tools or pre-processing inputs to work around safeguards. OpenAI banned accounts whose activity violated its policies, shared technical indicators with partners where appropriate, and used a combination of automated and human review to prevent wider abuse. Specifically, OpenAI disrupted three clusters that used ChatGPT to assist with malware development, phishing, and social-media surveillance planning.
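The pairing of automated screening with human escalation that the report describes can be illustrated in the abstract. The sketch below is a hypothetical, heavily simplified layered-review pipeline, not OpenAI's actual tooling: the pattern list, scoring thresholds, and names such as SUSPICIOUS_PATTERNS, ReviewQueue, and triage are all invented for illustration, and a crude regex score stands in for a real classifier. Clearly malicious requests are refused outright, while borderline ones land in a queue for a human analyst.

```python
# Hypothetical sketch of layered abuse review: an automated first pass
# scores requests, and borderline cases are escalated to human analysts.
# All names and thresholds here are illustrative assumptions.
import re
from dataclasses import dataclass, field

# Toy stand-in for a real abuse classifier's signal set.
SUSPICIOUS_PATTERNS = [
    r"\bkeylogger\b",
    r"credential[- ]harvest",
    r"bypass\w*\s+(av|antivirus|edr)",
]

@dataclass
class ReviewQueue:
    """Holds borderline requests for human analysts to triage."""
    pending: list = field(default_factory=list)

    def escalate(self, account_id: str, prompt: str, score: float) -> None:
        self.pending.append({"account": account_id, "prompt": prompt, "score": score})

def automated_score(prompt: str) -> float:
    """Crude pattern-based risk score in [0, 1]; a real system would use a model."""
    hits = sum(bool(re.search(p, prompt, re.IGNORECASE)) for p in SUSPICIOUS_PATTERNS)
    return min(1.0, hits / len(SUSPICIOUS_PATTERNS) + 0.2 * hits)

def triage(account_id: str, prompt: str, queue: ReviewQueue,
           block_at: float = 0.8, review_at: float = 0.4) -> str:
    score = automated_score(prompt)
    if score >= block_at:
        return "refuse"          # clearly malicious: refuse outright
    if score >= review_at:
        queue.escalate(account_id, prompt, score)
        return "human_review"    # borderline: a person decides
    return "allow"

queue = ReviewQueue()
print(triage("acct-42", "Write a keylogger that bypasses EDR", queue))  # -> refuse
```

The layering matters because, as the report notes, attackers pre-process inputs to slip past any single automated check; a human queue catches the cases the first pass only suspects.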

OpenAI permanently banned accounts suspected of links to Chinese government entities that sought proposals for social-media monitoring tools, and noted Chinese-language accounts supporting phishing and malware workflows. It also acted against Russian-language criminal users who tried to develop malware with model assistance, and against North Korea-linked activity that leveraged AI to accelerate influence and cyber operations. These actions combined account bans with information-sharing with industry and, where relevant, external partners.

The broader implication is that defending against AI-enabled abuse requires ongoing industry cooperation, transparency about tactics and mitigations, and sustained investment in detection and enforcement. OpenAI frames continued public reporting, indicator-sharing with partners, and improved model-side defenses as its primary mitigations, while warning that defenders must anticipate attackers combining multiple AI and non-AI tools to scale old threats. For operational audiences, the practical takeaway is to prioritize indicator-sharing, hardened account monitoring, and layered defenses, because AI is changing attackers’ tempo and scale even when it does not create wholly novel techniques.
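To make the indicator-sharing takeaway concrete, the sketch below ingests a hypothetical partner indicator feed and flags matching entries in local authentication logs. The feed schema, field names, and values (INDICATOR_FEED, AUTH_LOG, the ip/domain/email types) are assumptions for illustration, not any specific vendor's or sharing program's format.

```python
# Minimal sketch: consume a shared indicator feed and check it against
# local auth logs. Feed format and field names are illustrative only.
import csv
import io

# Stand-in for an indicator feed received from a sharing partner
# (e.g., IPs, domains, and contact addresses tied to banned accounts).
INDICATOR_FEED = """type,value
ip,203.0.113.7
domain,phish-login.example
email,operator@phish-login.example
"""

# Stand-in for local authentication / account-activity logs.
AUTH_LOG = [
    {"user": "alice", "src_ip": "198.51.100.4", "contact": "alice@corp.example"},
    {"user": "mallory", "src_ip": "203.0.113.7", "contact": "operator@phish-login.example"},
]

def load_indicators(feed_text: str) -> dict:
    """Index indicators by type for constant-time lookups."""
    indicators = {"ip": set(), "domain": set(), "email": set()}
    for row in csv.DictReader(io.StringIO(feed_text)):
        indicators.setdefault(row["type"], set()).add(row["value"].lower())
    return indicators

def match_log(entries, indicators):
    """Yield log entries that touch any shared indicator."""
    for entry in entries:
        if entry["src_ip"] in indicators["ip"] or entry["contact"].lower() in indicators["email"]:
            yield entry

iocs = load_indicators(INDICATOR_FEED)
for hit in match_log(AUTH_LOG, iocs):
    print(f"ALERT: {hit['user']} matched shared indicators ({hit['src_ip']})")
```

Even a simple lookup like this only pays off if feeds are refreshed as partners publish new indicators, which is why the report treats routine sharing, rather than one-off alerts, as the core defensive habit.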
