Stealthy attack serves poisoned web pages only to AI agents

September 10, 2025

AI agents have been a frequent subject of discussion on these pages. Many users are delighted at the prospect of an AI assistant automating work for them, heedless of the potential security dangers. Many of those dangers are variations on older threats, including SEO fraud, malvertising, and typosquatting. However, a new report from JFrog AI architect Shaked Zychlinski raises the possibility of an entirely new attack paradigm, one invisible to human eyes: the parallel-poisoned web.

The attack, an indirect prompt-injection poisoning technique, works like other known prompt injections: a malicious prompt is delivered in a way that is invisible to humans but visible to AI agents. By moving the injection into the browser environment, rather than hiding it in, say, invisible text in an email, the attack squarely targets browser-assistant agents, which typically hold privileged information belonging to their owners, including credentials and financial data.

The attack rests on browser fingerprinting, a common technique for identifying users on webpages. Going beyond simple cookies, browser fingerprinting collects a wide range of data points, including user-agent strings, installed fonts, screen resolution, plugins, and language settings. Threat actors already use it for so-called cloaking, showing malicious phishing content only to their intended targets in order to foil analysis.
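To make the fingerprinting step concrete, the sketch below (TypeScript, with hypothetical names; it is not code from the JFrog report) gathers the data points listed above into a single object that a cloaking server could use to decide which version of a page to serve.

```typescript
// Minimal, illustrative fingerprinting sketch (hypothetical; not from the report).
// It collects the data points named above: user-agent string, screen resolution,
// language settings, plugins, and a coarse check for installed fonts.
interface Fingerprint {
  userAgent: string;
  screen: string;
  languages: readonly string[];
  plugins: string[];
  fontsAvailable: string[];
}

function collectFingerprint(): Fingerprint {
  // document.fonts.check() reports whether a font can be rendered without loading it.
  const probeFonts = ["Courier New", "Menlo", "Segoe UI", "Noto Sans"];
  return {
    userAgent: navigator.userAgent,
    screen: `${window.screen.width}x${window.screen.height}@${window.devicePixelRatio}`,
    languages: navigator.languages,
    plugins: Array.from(navigator.plugins).map((p) => p.name),
    fontsAvailable: probeFonts.filter((f) => document.fonts.check(`12px "${f}"`)),
  };
}

// A cloaking site would typically send this object to its server and let the
// server choose which variant of the page to render for this visitor.
console.log(JSON.stringify(collectFingerprint(), null, 2));
```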

Zychlinski theorizes that browser AI agents have fingerprints of their own: unique, predictable signs that a browser is being driven by an agent. Some companies have publicly documented their agents' fingerprints, but even when that is not the case, an agent can be identified in several ways: behavioral signatures such as humanly impossible mouse movement or instantaneous form filling, browser extension IDs, and telltale properties in the browser's Document Object Model. It is even possible to pinpoint which LLM is in use through techniques such as LLMmap, which sends crafted queries to an application and matches patterns in the responses to determine the model and version.

Armed with this knowledge, an attacker could not only embed adversarial instructions into websites that surface only when the page is viewed by a browser with the matching fingerprint, but could in theory serve multiple sets of instructions, each calibrated to exploit a different browser agent built on a different LLM. So far the concept is purely theoretical, but prompt-injection attacks themselves are very practical and are observed frequently in the wild. Steps will have to be taken to mitigate the risk posed by the parallel-poisoned web.
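Those fingerprint signals cut both ways. The purely illustrative sketch below (hypothetical names and thresholds, again in TypeScript and not taken from the report) shows how few client-side signals are needed to separate an agent from a human visitor: the navigator.webdriver flag set by many automation frameworks, the absence of mouse movement, and forms completed faster than a person could type. Defenders auditing a site for cloaking, or hardening an agent against it, would be watching for content that diverges based on exactly these signals.

```typescript
// Hypothetical sketch of the agent-detection signals described above.
interface AgentSignals {
  webdriverFlag: boolean;       // DOM property exposed by many automation frameworks
  sawMouseMovement: boolean;    // humans generate mousemove events; many agents do not
  formFilledInstantly: boolean; // form submitted faster than a human could type
}

const signals: AgentSignals = {
  webdriverFlag: navigator.webdriver === true,
  sawMouseMovement: false,
  formFilledInstantly: false,
};

// Record whether any mouse movement ever occurs on the page.
document.addEventListener("mousemove", () => { signals.sawMouseMovement = true; }, { once: true });

// Flag forms submitted implausibly soon after page load (threshold is illustrative).
const form = document.querySelector("form");
if (form) {
  const loadedAt = performance.now();
  form.addEventListener("submit", () => {
    signals.formFilledInstantly = performance.now() - loadedAt < 1000;
  });
}

// A cloaking server would combine a verdict like this with the fingerprint above
// before deciding which page variant to return.
function looksLikeAgent(s: AgentSignals): boolean {
  return s.webdriverFlag || (!s.sawMouseMovement && s.formFilledInstantly);
}
```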
