SquareX: Browser AI Agents Are The Weakest Link

July 2, 2025

It is a truism among security professionals that the weakest link in any cybersecurity strategy is the human element. However, a report from browser cybersecurity firm SquareX makes the striking claim that a still weaker link has come into play, one that will affect many organizations: browser AI agents. Browser AI agents, such as OpenAI’s Operator, Anthropic’s Claude Computer Use, and the open-source framework Browser Use, are AI agents designed to operate in the browser and automate multi-step tasks for users, such as scheduling emails, managing inboxes, and booking flights. They are built to complete tasks based on the user’s instructions as quickly and accurately as possible.

What makes browser agents particularly dangerous from a security perspective is that, because they perform actions on the user’s behalf, they carry exactly the same privilege level as that user, with access to the same privileged data and enterprise apps the user would have. In most cases, there is no way to distinguish between the activity of a particular user and that of a browser agent the user might be employing. Further, these agents are not built with security in mind. Unlike employees, who can be trained to recognize potential dangers, AI agents typically ship without security guardrails that would help them spot cyberattacks. As a result, common attack patterns that would have a low chance of tripping up a trained human today can easily be turned against browser agents.
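To make the privilege-sharing point concrete, the sketch below shows how a browser automation framework can drive a user’s existing browser profile. This is a hypothetical illustration, not code from the SquareX report: the profile path and target URL are placeholders, and Playwright stands in for whatever framework a given agent might use. The key point is that a persistent browser context reuses the profile’s cookies and saved sessions, so every page the agent visits sees the same authenticated user.

```python
# Hypothetical sketch: an agent driving the user's own browser profile.
# Because the persistent context reuses the profile's cookies and sessions,
# the agent's requests are indistinguishable from the user's own activity
# and carry the user's privileges.
from playwright.sync_api import sync_playwright

USER_PROFILE_DIR = "/path/to/user/browser-profile"  # placeholder profile path

with sync_playwright() as p:
    # Launch a browser bound to the user's existing profile,
    # logged-in sessions included.
    context = p.chromium.launch_persistent_context(
        user_data_dir=USER_PROFILE_DIR,
        headless=True,
    )
    page = context.new_page()

    # The "agent" navigates to a site; in a real deployment this would be an
    # enterprise app where the profile's session cookies already authenticate it.
    page.goto("https://example.com/")
    print(page.title())

    context.close()
```

Nothing in the resulting traffic marks it as agent-driven, which is why logs and access controls built around what "the user" did cannot tell the two apart.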

SquareX provides two clear examples of how a browser AI agent can be preyed upon. In the first, a browser agent tasked with logging in to Salesforce and completing a task was steered onto a phishing page by search engine malvertising, where it proceeded to enter the user’s login credentials, handing them over freely to the threat actor. In the second, a browser agent tasked with performing research and saving the results to a file-sharing service fell victim to an OAuth attack, again delivered through malvertising, and ended up granting access to a Google Drive account (a sketch of such a consent link appears below). Both cases clearly indicate that browser AI agents are not trained to recognize common web-based attacks. Given the lack of well-labeled public datasets and the ever-changing threat landscape, it is unlikely that AI agent developers can make their agents sufficiently robust against these attacks. Security personnel in enterprises that handle sensitive data should seriously consider restricting, or even banning outright, the use of such agents, given the risk they pose to vulnerable data.
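To make the OAuth scenario concrete, the sketch below shows roughly what a consent-phishing link looks like. The client ID, redirect URI, and attacker domain are invented placeholders; the authorization endpoint and Drive scope are Google’s standard ones. If an agent follows such a link from a malvertised page and approves the resulting consent screen, the attacker’s app receives a token scoped to the victim’s Google Drive, with no password ever being stolen.

```python
# Hypothetical sketch of a consent-phishing authorization URL.
# The attacker registers their own OAuth app; the only "exploit" is getting
# the agent (or user) to click "Allow" on the resulting consent screen.
from urllib.parse import urlencode

params = {
    "client_id": "attacker-app.apps.googleusercontent.com",  # placeholder client ID
    "redirect_uri": "https://attacker.example/callback",      # attacker-controlled endpoint
    "response_type": "code",
    # Full access to the victim's Google Drive.
    "scope": "https://www.googleapis.com/auth/drive",
    "access_type": "offline",  # request a refresh token for persistent access
    "prompt": "consent",
}

# Google's standard OAuth 2.0 authorization endpoint.
consent_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(consent_url)
```

A trained user might notice the unfamiliar app name or the unusually broad permissions on the consent screen; an agent optimizing for task completion is far more likely to simply approve it.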
