Security Magazine logo
search
cart
facebook twitter linkedin youtube
  • Sign In
  • Create Account
  • Sign Out
  • My Account
Security Magazine logo
  • NEWS
    • Security Newswire
    • Technologies & Solutions
  • MANAGEMENT
    • Leadership Management
    • Enterprise Services
    • Security Education & Training
    • Logical Security
    • Security & Business Resilience
    • Profiles in Excellence
  • PHYSICAL
    • Access Management
    • Fire & Life Safety
    • Identity Management
    • Physical Security
    • Video Surveillance
    • Case Studies (Physical)
  • CYBER
    • Cybersecurity News
    • More
  • BLOG
  • COLUMNS
    • Career Intelligence
    • Cyber Tactics
    • Cybersecurity Education & Training
    • Leadership & Management
    • Security Talk
  • EXCLUSIVES
    • Annual Guarding Report
    • Most Influential People in Security
    • The Security Benchmark Report
    • Top Guard and Security Officer Companies
    • Top Cybersecurity Leaders
    • Women in Security
  • SECTORS
    • Arenas / Stadiums / Leagues / Entertainment
    • Banking/Finance/Insurance
    • Construction, Real Estate, Property Management
    • Education: K-12
    • Education: University
    • Government: Federal, State and Local
    • Hospitality & Casinos
    • Hospitals & Medical Centers
    • Infrastructure:Electric,Gas & Water
    • Ports: Sea, Land, & Air
    • Retail/Restaurants/Convenience
    • Transportation/Logistics/Supply Chain/Distribution/ Warehousing
  • EVENTS
    • Industry Events
    • Webinars
    • Solutions by Sector
    • Security 500 Conference
  • MEDIA
    • Interactive Spotlight
    • Photo Galleries
    • Podcasts
    • Polls
    • Videos
      • Cybersecurity & Geopolitical Discussion
      • Ask Me Anything (AMA) Series
  • MORE
    • Call for Entries
    • Classifieds & Job Listings
    • Newsletter
    • Sponsor Insights
    • Store
    • White Papers
  • EMAG
    • eMagazine
    • This Month's Content
    • Advertise
  • SIGN UP!
CybersecurityLogical SecuritySecurity & Business Resilience

What Claude and OpenClaw Vulnerabilities Reveal About AI Agents

By Elad Luz
AI chip up close
Igor Omilaev via Unsplash
April 24, 2026

Two recent vulnerability disclosures in Claude and OpenClaw forced a question every security team should be asking: if a single manipulated input can silently compromise everything an AI agent can reach, are you governing those agents like the privileged accounts they are?

When security teams think about privileged accounts, they think about service accounts with domain admin rights. What they typically don’t think about is the AI agent their developers installed. These are the agents that can read Slack messages, execute commands, access calendars, and search through months of sensitive conversations. That blind spot is exactly where attackers look. 

The Oasis Threat Research Team has disclosed two separate vulnerability chains, one targeting Claude, Anthropic’s widely used AI assistant, and one targeting OpenClaw, the open-source AI agent that has accumulated millions of users. Although the technical details differ, the underlying lesson doesn’t.

AI agents act autonomously, hold credentials, and make decisions on behalf of the humans and organizations that deploy them. From what we’ve seen, adoption is already reflecting this shift, with 79% of organizations already using AI agents and 28.6 million active agents deployed across enterprises globally in 2025 alone, a number projected to grow to over 2.2 billion by 2030. The pace shows no sign of slowing. In both investigations our team conducted, a single manipulated input was enough to compromise everything those agents could reach.

What Do These Flaws Actually Tell Us?

In the Claude investigation, our team uncovered three vulnerabilities that, when chained together, created a complete attack pipeline we called Claudy Day. An attacker crafts a Google search ad that looks completely legitimate. A user clicks it. Behind that click is a pre-filled chat link carrying hidden instructions, invisible in the interface but fully processed by Claude. Without any indication that something is wrong, Claude searches the user’s conversation history, extracts sensitive information, and sends it to the attacker. No special tools, no suspicious prompts, no warning signs.

The OpenClaw investigation started from a different place but arrived at the same destination. OpenClaw runs a local gateway on the developer’s machine that trusts connections from localhost. That assumption was reasonable for its intended use. What it did not account for is that any website a developer visits can silently reach that same gateway through the browser. Our team connected, brute-forced the password without triggering a single alert, and took full control of the agent. The user saw nothing. The whole thing started with an ordinary website visit.

Two platforms built differently, governed differently, used for entirely different purposes, seemingly creating the same. A single manipulated input was all it took to compromise everything the agent could reach. Both vendors responded responsibly and quickly, but the speed of the patch is not the point. These vulnerabilities were symptoms of a deeper structural problem that patching alone cannot fix.

The Risk Multiples with Access

Both attacks were demonstrated in bare-bones configurations. In production, neither agent operates in isolation. When an AI agent is connected to enterprise tools, corporate APIs, or MCP servers, its effective permissions become the union of everything it can reach. 

A compromised Claude session with integrations enabled can read files, send messages, and interact with every connected service before the user has time to react. A compromised OpenClaw instance can dump credentials, search messaging histories, and execute system commands across the environment. The vulnerability is not just in the agent. It is in what the agent is allowed to touch.

Attackers are also exploiting trust mechanisms. The Claudy Day delivery chain worked because the prompt the user submitted was not necessarily the prompt the agent received, breaking prompt integrity and undermining trust in the agent’s inputs. The OpenClaw attack worked because localhost is inherently trusted, a design assumption that was reasonable until it was not. AI attack surfaces extend beyond software into how users perceive legitimacy and what assumptions architects made when they built the system.

What Effective Governance Actually Looks Like

The uncomfortable truth is that governance has not kept pace with deployment. Nearly 74% of companies plan to deploy agentic AI within two years, yet only 21% have a mature model for governing them. Vulnerabilities like Claudy Day and OpenClaw reinforce the risk of that governance gap, especially for organizations deploying AI agents without treating them as privileged identities.

These attack chains were preventable, not at the vulnerability level, but at the governance level. Organizations with appropriate controls in place would have been harder to exploit and faster to detect. Here’s what that actually requires:

  1. Inventory what you are running to know which AI agents are active, what they can access, and what credentials they hold.
  2. Treat AI agents as privileged identities with policies, access controls, scoped permissions, and audit trails.
  3. Require explicit approval for sensitive actions so agents cannot access memory, call APIs, read files, or send messages without authorization.
  4. Scope permissions to the minimum necessary access to limit the impact if something goes wrong.
  5. Log all agent actions and review them to ensure visibility and enable detection and forensic response.
  6. Educate users that links, shared URLs, and search results can carry hidden instructions that AI agents will execute.

Looking Forward

Claude and OpenClaw are different platforms serving different use cases. What they share is that they’re both AI agents with entities that receive inputs, take autonomous actions, and hold access to things that matter. The pattern our team identified across both investigations is not a coincidence. It is a signal that the industry is facing a systemic challenge it has not yet built the frameworks to address.

The organizations that take that seriously now, that inventory their agents, scope their permissions, and govern them like the privileged identities they are, will be materially harder to compromise than those that wait for an incident to force the issue. That window is closing faster than most security teams realize.

KEYWORDS: artificial intelligence (AI) Artificial Intelligence (AI) Security governance

Share This Story

Elad luz headshot

Elad Luz is Head of Research at Oasis Security. Image courtesy of Luz

Blog Topics

Security Blog

On the Track of OSAC

Blog Roll

Security Industry Association

Security Magazine's Daily News

SIA FREE Email News

SDM Blog

Manage My Account
  • Security Newsletter
  • eMagazine Subscriptions
  • Manage My Preferences
  • Online Registration
  • Mobile App
  • Subscription Customer Service

More Videos

Popular Stories

Cables plugged in

Chinese Supercomputer Allegedly Hacked, 10 Petabytes of Data Stolen

Man on laptop

Healthcare Executives Face a New Era of Personal Risk

Abstract shape

What Are Security Experts Saying About Claude Mythos and Project Glasswing?

Padlock with computer keys

Breach of FBI Surveillance System Considered a “Major Incident,” Security Experts Weigh In

AI

AI Startup Mercor, Which Works With Open AI and Anthropic, Confirms Data Breach

SEC 2026 Benchmark Banner
SEC 2026 Benchmark Banner

Events

April 30, 2026

Building a Campus-Wide Culture of Security and Shared Responsibility

In today’s higher education environment, where institutions face evolving and multifaceted incidents, safety must be embedded into the fabric of campus culture. Learn strategies for generating collective buy-in from faculty, staff, students and senior leadership. 

May 7, 2026

Beyond Cameras: Revolutionizing Perimeter Security with LiDAR, AI and Digital Twins

In this webinar, we will explore how LiDAR‑based detection, AI‑powered analytics and digital twins are transforming the future of perimeter protection with 3D detection, real-time situational awareness and unified operational views.

View All Submit An Event

Products

Security Culture: A How-to Guide for Improving Security Culture and Dealing with People Risk in Your Organisation

Security Culture: A How-to Guide for Improving Security Culture and Dealing with People Risk in Your Organisation

See More Products
SEC 2026 Top Cybersecurity Leaders
×

Sign-up to receive top management & result-driven techniques in the industry.

Join over 20,000+ industry leaders who receive our premium content.

SIGN UP TODAY!
  • RESOURCES
    • Advertise
    • Contact Us
    • Store
    • Want More
  • SIGN UP TODAY
    • Create Account
    • eMagazine
    • Newsletter
    • Customer Service
    • Manage Preferences
  • SERVICES
    • Marketing Services
    • Reprints
    • Market Research
    • List Rental
    • Survey/Respondent Access
  • STAY CONNECTED
    • LinkedIn
    • Facebook
    • YouTube
    • X (Twitter)
  • PRIVACY
    • PRIVACY POLICY
    • TERMS & CONDITIONS
    • DO NOT SELL MY PERSONAL INFORMATION
    • PRIVACY REQUEST
    • ACCESSIBILITY

Copyright ©2026. All Rights Reserved BNP Media, Inc. and BNP Media II, LLC.

Design, CMS, Hosting & Web Development :: ePublishing