AI model Claude Opus turns bugs into exploits for just $2,283

In recent weeks, there has been considerable concern over Anthropic's latest AI model, Claude Mythos, and its ability to rapidly uncover previously unknown vulnerabilities in code belonging to major organizations. Many worry that public access to such a model will trigger a massive wave of new exploitation attacks, as threat actors use AI to discover and exploit vulnerabilities faster than security teams can patch them. Yet even the AI that is publicly available today is already reshaping the cybersecurity landscape. A recent report examined Claude Opus, the most advanced Anthropic model currently available to the public, and its ability to generate working exploits.


The experiment involved the use of Claude Opus to develop an exploit targeting the V8 JavaScript engine in Google Chrome, specifically focusing on an outdated version embedded in Discord. Over roughly a week of iterative interaction, the system processed approximately 2.3 billion tokens across more than 1,700 requests, ultimately producing a working exploit at a cost of $2,283 in API usage. The researcher guiding the process had to intervene periodically to keep the model on track, indicating that while the AI can assist significantly, it is not yet fully autonomous in exploit development. The final proof-of-concept successfully executed code on the target system, demonstrating practical exploitation capability rather than a purely theoretical outcome.
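The reported figures allow a quick back-of-the-envelope cost breakdown. A minimal sketch, using only the numbers quoted above (2.3 billion tokens, roughly 1,700 requests, $2,283 total); the per-request averages are rough estimates, since the article says "more than 1,700 requests":

```python
# Figures reported in the article (approximate).
TOTAL_TOKENS = 2_300_000_000   # ~2.3 billion tokens processed
TOTAL_REQUESTS = 1_700         # "more than 1,700 requests" -> lower bound
TOTAL_COST_USD = 2_283         # total API spend over roughly a week

# Blended rate across all traffic (input, output, any cache hits combined).
cost_per_million_tokens = TOTAL_COST_USD / (TOTAL_TOKENS / 1_000_000)

# Rough per-request averages, treating 1,700 as the request count.
tokens_per_request = TOTAL_TOKENS / TOTAL_REQUESTS
cost_per_request = TOTAL_COST_USD / TOTAL_REQUESTS

print(f"Blended cost: ${cost_per_million_tokens:.2f} per million tokens")
print(f"Average request: ~{tokens_per_request / 1_000_000:.2f}M tokens, "
      f"${cost_per_request:.2f}")
```

The blended rate works out to roughly a dollar per million tokens, and each request averaged over a million tokens of context, which is consistent with long, iterative exploit-development sessions rather than short one-off prompts.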

From an economic perspective, the experiment suggests that AI-assisted exploit creation can already be financially viable. While building the exploit cost more than $2,000, comparable vulnerabilities can yield far higher returns through bug bounty programs or illicit markets. Legitimate programs, for example, may pay several thousand to tens of thousands of dollars per valid exploit, making the investment potentially profitable.

The case also underscores systemic risk factors, particularly the prevalence of outdated software components. The Chrome instance embedded in Discord lagged several versions behind the current release, creating an exploitable attack surface. This reinforces the importance of timely patching and dependency management: lagging updates significantly increase exposure when combined with increasingly capable AI-driven tooling.

Overall, the report illustrates a shift in the threat landscape: AI is no longer limited to identifying vulnerabilities but is beginning to play a direct role in weaponizing them. Current limitations remain, including the need for human oversight and iterative prompting, but the trajectory points toward steadily improving autonomous exploit generation.
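The profitability argument can be made concrete. A hedged sketch: the development cost is the article's reported figure, but the payout values below are hypothetical examples drawn from the article's "several thousand to tens of thousands of dollars" range, not actual bounty awards:

```python
# Reported cost of developing the exploit with Claude Opus.
DEV_COST_USD = 2_283

# Hypothetical bounty payouts, illustrating the range the article cites.
for payout in (5_000, 15_000, 30_000):
    net = payout - DEV_COST_USD
    roi = net / DEV_COST_USD
    print(f"${payout:,} payout -> net ${net:,}, ROI {roi:.0%}")
```

Even at the low end of the range, the return comfortably exceeds the API spend, which is the core of the economic concern: once tooling costs fall below typical payouts, the incentive applies equally to bounty hunters and to attackers selling on illicit markets.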
