Should Organizations Block AI Browsers? Security Leaders Discuss

Recent research from Gartner encourages organizations to block agentic browsers (or AI browsers). While these browsers could transform user interaction with websites, they could also introduce considerable cybersecurity risks. Therefore, the report asserts that “CISOs must block all AI browsers in the foreseeable future to minimize risk exposure.”
“There’s good reason to be cautious about AI-powered browsers; they introduce a new class of risks that enterprises aren’t fully prepared for,” says Lionel Litty, Chief Information Security Officer and Chief Security Architect at Menlo Security. “Even if you trust the AI browser vendor and are comfortable with data sharing, you need hard guardrails around how the browser operates. Limit the sites it can reach, apply strict DLP controls, and scan anything it downloads. And make sure you have a strategy to defend these browsers against vulnerabilities. They can be led astray to dark corners of the web, and URL filtering alone isn’t enough.”
Earlier this year, research from Guardio Labs showed how AI browsers could be manipulated into interacting with malicious landing pages, describing “an attack surface far wider than anything we’ve faced before, where breaking one AI model could mean compromising millions of users simultaneously.”
The risks associated with agentic AI are considerable, especially as development accelerates faster than security measures can keep up.
Randolph Barr, Chief Information Security Officer at Cequence Security, shares, “As organizations rapidly adopt agentic AI, Model Context Protocol (MCP), and autonomous browsing capabilities, we’re seeing a pattern develop: AI-native browsers are introducing system-level behaviors that traditional browsers have intentionally restricted for decades. That shift breaks long-standing assumptions about how secure a browser environment is supposed to be.
“But the real exposure emerges when individuals install AI browsers on their personal devices. We know from every technology adoption wave (cloud apps, messaging platforms, AI assistants) that employees first test these tools at home. With AI browsers, curiosity will drive rapid experimentation. Once users become comfortable with these tools at home, those behaviors inevitably bleed into the workplace through BYOD access, browser sync features, or personal devices used for remote work.
“What’s more concerning is how easy AI browsers are to detect and how quickly adversaries can scale that detection. AI browsers introduce unique fingerprints in their APIs, extensions, DOM behavior, network patterns, and agentic actions. Attackers can identify them with a few lines of JavaScript or by probing for AI-specific behaviors that differ from traditional browsers. With AI-driven classification models, bad actors can now fingerprint AI browsers across millions of sessions automatically. At scale, that enables targeted attacks against users running these higher-risk, agent-enabled environments.
“This underscores why enterprises remain cautious. AI browsers are evolving faster than the guardrails that traditionally protect end users and corporate environments. Transparency around system-level capabilities, independent audits, and the ability to fully control or disable embedded extensions are table stakes if these browsers want to be considered for regulated or sensitive workflows.”
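Barr's point about fingerprinting can be illustrated with a short sketch. The check below is hypothetical: the `__aiAgent` property and the user-agent tokens are placeholder assumptions (real AI browsers expose different, vendor-specific markers), and only `navigator.webdriver` is a standardized automation signal. It is meant to show the pattern, not any specific vendor's fingerprint.

```javascript
// Hedged sketch: how a site might probe for an agent-enabled browser.
// The "__aiAgent" global and the user-agent tokens are hypothetical
// placeholders; only navigator.webdriver is a standardized signal.
function detectAgenticBrowser(win) {
  const signals = [];

  // 1. Standard automation flag set by WebDriver-controlled browsers.
  if (win.navigator && win.navigator.webdriver) {
    signals.push("webdriver");
  }

  // 2. Hypothetical injected agent API (vendor-specific in practice).
  if ("__aiAgent" in win) {
    signals.push("injected-agent-api");
  }

  // 3. User-agent tokens that some automated clients advertise.
  const ua = (win.navigator && win.navigator.userAgent) || "";
  if (/agent|headless/i.test(ua)) {
    signals.push("ua-token");
  }

  return { isSuspect: signals.length > 0, signals };
}
```

In practice, attackers would combine static checks like these with behavioral probes (timing of clicks, DOM interaction patterns) and, as Barr notes, classification models that scale the detection across millions of sessions.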
As AI agents grow increasingly commonplace, security leaders must prepare their organizations by understanding the benefits, risks, and best practices for implementation.
“We are approaching a future where the use of AI agents will outpace the readiness of security measures,” says Barr. “Advisories like this help highlight the gaps and hopefully drive the industry toward more secure, transparent designs before these tools become deeply embedded in enterprise ecosystems.”