Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign

May 7, 2025

AI agents are becoming increasingly effective at operating in public digital spaces, and few examples demonstrate this as well as Claude, the chatbot created by Anthropic. Capable of deep reasoning, visual analysis, and multilingual processing, Claude is recognized as a preeminent AI agent on par with better-known counterparts such as ChatGPT. However, these capabilities can be leveraged in ways the developers never intended. Threat actors have been using AI to facilitate their operations for some time, but a recent report on the malicious use of Claude reveals a never-before-seen degree of agency: the model not only generated content, it also organized the operation.

The threat actor in question was an influence-as-a-service provider that operated a network of more than 100 distinct social media personas to push propaganda narratives on behalf of its clients, primarily in Europe and the Middle East. These personas were all generated by Claude, and at an extremely detailed level. According to Anthropic's report, the threat actor used Claude as a centralized decision-maker that:

- maintained detailed political alignment guidelines for each persona;
- evaluated whether drafted content aligned with each persona's political viewpoints;
- decided how to react to content posted by other users, in keeping with each persona's legend;
- generated appropriate responses in the persona's voice and native language; and
- created prompts for image generation tools and evaluated their outputs.

The threat actor itself does not appear to be politically aligned, having used the service to support multiple different and independent narratives.

None of the capabilities demonstrated by Claude in this attack are inherently malicious, yet they were put to malicious use. Other abuses noted by Anthropic, such as using Claude to generate malware, are more overtly malicious and easier to constrain with guardrails; influence operations and fraud, by contrast, are assembled from otherwise legitimate behaviors and are far harder to block. The account in question was banned, but future threat actors will pick up where this one left off. There is no easy solution to this problem, and the question of what role AI should be allowed to play in our lives, given such risks of criminal abuse, will ultimately have to be answered by lawmakers worldwide.
