Exploring the Top Cyber Threats Facing Agentic AI Systems
For better or worse, Agentic AI systems, meaning programs that utilize large language models to interpret natural language instructions and perform automated tasks, have steadily increased their level of contribution in the workplace. Automatic browser agents, as we have discussed in previous commentaries, are often given very high levels of access and authority in order to perform the tasks they have been asked to automate, including access to emails and even payment data. These agents no doubt provide convenience to their users, but they also contribute their own set of entirely new cybersecurity risks. Those risks were delineated in detail by the chief architect at Protect AI, a platform for AI security that was recently acquired by Palo Alto Networks, Sean Morgan, during the AI Summit at the Black Hat USA conference held in Las Vegas last week.
During his briefing, Morgan divided the risks facing Agentic AI into three main categories: corrupted context within an instruction to the AI agent, dynamic tool sourcing and supply chain risks, and authentication and authorization mistakes throughout the AI agent’s control flow. Of these, the most critical is context corruption. Morgan noted that LLMs have no built-in capacity to distinguish between legitimate instructions and malicious interventions and cannot determine whether the instructions they receive are coming from the intended user or not. This means that an attacker is capable of overwriting user instructions and substituting their own, similar to an SQL injection attack. Nor is this limited to a single text interaction: Morgan highlighted that context can be corrupted through multiple channels, including chat histories, long-term memory interactions, vector databases, document repositories and even outputs from other AI agents. This was demonstrated by EchoLeak, a zero-click AI vulnerability discovered in June that used Copilot’s context to exfiltrate data by bypassing the LLM’s security measures.
Dynamic tool sourcing introduces additional complexity problems to the security issue by allowing AI agents to select and combine tools to accomplish tasks autonomously. While security personnel might be able to vet the tools that an AI agent has access to, threat actors can introduce new tools or altering the behavior of existing tools to create new supply chain risks. Morgan described this as an “MCP bug hole.” The main strength of AI agents is the ability to combine resources to autonomously solve problems, but this strength is also a serious security hole.
Last is the problem of authorization. Authorization is simple enough when it comes to a single human user authenticating their identity for use of a tool, but bringing agents into play makes matters more difficult. For the agent to perform its functions, it has to receive certain permissions from the user. This orchestrating agent then often has to employ multiple specialized sub-agents to automate its jobs. Determining how to dynamically adjust permissions across sub-agents is a complex problem, and one that can be exploited by threat actors if they find a way to insert their own sub-agent into the mix, or facilitate threat activity via multi-agent interactions.
Morgan also laid out several recommendations for the future of securing Agentic AI systems. Chief among these were: the maintaining of visibility and control of all instruction context, threat modeling of agentic deployments and SaaS solutions, and the development of protocols and systems that facilitate proper authentication and authorization control. While AI does present these security risks, that doesn’t mean that the problem of Agentic AI is an insurmountable security problem. With the proper investment of time and energy into the issue, we can fix these problems and create a new secure AI protocol to reap the benefits of automation in the future.