Agentic AI Browsers Exploited by “PromptFix” Trick Technique

Research from Guardio Labs has shown a new prompt injection technique is capable of deceiving a genAI model into carrying out certain actions by embedding a false a fake CAPTCHA check on a webpage with malicious instructions. This research demonstrates how Agentic AI, while performing routine tasks such as shopping online, could be manipulated into malicious landing pages or lookalike storefronts without human knowledge.
Security Leaders Weigh In
Lionel Litty, Chief Security Architect at Menlo Security:
We are seeing a seemingly endless stream of attacks against AI agents — they are gullible and they are servile. In an adversarial setting, where an AI agent may be exposed to untrusted input, this is an explosive combination. Unfortunately, the web in 2025 is very much an adversarial setting.
We are also seeing that soft guardrails, which involve providing agents with more training and refined instructions, are usually a small hurdle that can be quickly overcome. If you want to let an agent loose on the broader web, you should really have hard boundaries that limit what information the agent has access to and what it is permitted to do.
Krishna Vishnubhotla, Vice President, Product Strategy at Zimperium:
Before the arrival of genAI, attackers were already proficient at rapidly creating new domains to bypass traditional phishing detection tools. The focus was on speed and creating domains quickly to elude detection and launch attacks. However, with the rise of genAI, phishing attacks have become more sophisticated and automated, making traditional security tools increasingly ineffective, particularly on mobile browsers.
Sophistication shows up in the form of highly realistic and personalized, well-written phishing content at scale across all mobile phishing (mishing) vectors, including audio, video, and voicemail. The automation aspect allows attackers to clone websites in seconds, making brand impersonation easier than ever.
Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace:
As adversaries double down on the use and optimization of autonomous agents for attacks, human defenders will become increasingly reliant on and trusting of autonomous agents for defense. Specific types of AI can perform thousands of calculations in real time to detect suspicious behavior and perform the micro decision-making necessary to respond to and contain malicious behavior in seconds. Transparency and explainability in the AI outcomes are critical to foster a productive human-AI partnership.
David Matalon, CEO at Venn:
Remote and hybrid work have increased the threat landscape by introducing more variability: different networks, devices, and home-office setups that IT doesn’t control. This shift to BYOD and unmanaged endpoints means that traditional, in-office security models no longer suffice. While securing the browser is critical, it’s not enough. A broader approach is needed; one that protects data and applications accessed by remote workers and contractors, and not just on company-managed, locked-down computers in the office.
Looking for a reprint of this article?
From high-res PDFs to custom plaques, order your copy today!






