Agentic AI Security Is Complicated, and the Hyper-Scalers Know It

With the launch of Agent 365, a unified control plane that promises agent governance through familiar tools like Entra and Purview, Microsoft effectively admitted that AI agents are a security risk.
This is a necessary first step toward securing AI agents, but it’s not the finish line. In fact, Gartner predicts that 40% of organizations will soon abandon their agentic AI projects over these exact security concerns. Microsoft is attempting to stem the bleeding with Agent 365, but it’s complicated: even with Entra IDs assigned to every agent, discovery and lifecycle management remain only partial solutions.
Instead of relying solely on the hyper-scalers, organizations should follow three best practices to lay the groundwork for secure, efficient agentic AI adoption.
1. Implement Comprehensive, Holistic Data Governance
According to research, only 30% of AI-adopting organizations classify and protect data effectively. IBM, meanwhile, has found that 63% of AI-adopting organizations lack an AI governance framework entirely, a gap that has led to widespread AI-related breaches across regions and industries.
True governance requires a framework that manages your entire data estate, from creation to deletion, regardless of the cloud platform your agents are built on. To contain these risks, you need automated data classification and access controls that protect the data itself, not just the agents accessing it. These measures require upfront investment, but they are still more effective and less costly than the reactive solutions the hyper-scalers are scrambling to provide. The benefits of holistic data governance also extend beyond agentic AI: with better-organized and better-regulated data, you’ll limit risk, reduce storage costs, and increase efficiency across your entire organization.
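As a minimal illustration of what “protect the data itself” can mean in practice, the sketch below labels records with a sensitivity level at creation and enforces that label whenever an agent requests access, independent of which platform the agent runs on. The labels, the regex patterns, and the deny-by-default `check_access` policy are all hypothetical, illustrative choices, not a reference to any specific product; real classification pipelines combine pattern matching, ML models, and metadata.

```python
import re
from dataclasses import dataclass
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Illustrative patterns only; a production classifier would be far richer.
PII_PATTERNS = {
    Sensitivity.RESTRICTED: [r"\b\d{3}-\d{2}-\d{4}\b"],       # SSN-like token
    Sensitivity.CONFIDENTIAL: [r"[\w.+-]+@[\w-]+\.[\w.]+"],   # email-like token
}

@dataclass
class DataRecord:
    content: str
    label: Sensitivity = Sensitivity.INTERNAL  # default label at creation

def classify(record: DataRecord) -> DataRecord:
    """Assign the highest sensitivity whose pattern matches the content."""
    for level, patterns in sorted(PII_PATTERNS.items(), reverse=True):
        if any(re.search(p, record.content) for p in patterns):
            record.label = level
            break
    return record

def check_access(agent_clearance: Sensitivity, record: DataRecord) -> bool:
    """Deny by default: an agent may only read data at or below its clearance."""
    return agent_clearance >= record.label

record = classify(DataRecord("Contact: jane.doe@example.com"))
assert record.label is Sensitivity.CONFIDENTIAL
assert not check_access(Sensitivity.INTERNAL, record)  # agent is blocked
```

The key design choice is that the label travels with the data, so the same access decision applies no matter which agent, on which cloud, asks for it.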
2. Train Your Teams on Agentic AI Governance and Security Best Practices
AI security and governance frameworks won’t be truly effective until they’ve been socialized throughout an organization, which is why enablement programs are a crucial step in secure agentic AI adoption. This is true of all AI products, but especially of agentic AI tools, whose autonomy and decision-making authority create novel risks.
Your first line of defense is an educated workforce that knows your organization’s policies. Organizations should therefore conduct targeted training on the technical and ethical risks of agentic AI, organize cross-functional incident response exercises, and provide regular updates on evolving regulations. Ongoing education and hands-on experience make it easier to identify threats and adapt quickly to new compliance requirements, limiting risk in ways tooling alone cannot.
3. Integrate Additional Agent Oversight
“Native” security often means “locked-in” security. Relying solely on platform-native controls ignores the reality that most teams use tools across multiple clouds. With agnostic oversight from a third party, your security posture isn’t dictated by a single vendor’s roadmap, and you can govern agents consistently whether they live in Azure, AWS, GCP, or anywhere else.
These solutions often integrate seamlessly with existing security infrastructure and provide transparent reporting, which helps organizations quickly identify and mitigate anomalous agent activity. Unlike the reactive fixes from big vendors, which often lack flexibility and comprehensive coverage, third-party tools offer more independence and let you maintain a consistent, organization-specific security posture across all AI deployments. It’s also critical to establish guardrails for unmanaged agents, not just the agents your organization deliberately publishes, a need that third-party providers are often well positioned to meet.
While the hyper-scalers are used to “moving fast and breaking things” (as Mark Zuckerberg once put it) or fostering a culture where “everyone is a maker” (as they say at Microsoft), other providers are used to managing the risk those innovations inadvertently create. That’s another reason to integrate additional oversight for managed and unmanaged agents alike, as sketched below.
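To make platform-agnostic oversight concrete, here is a small sketch that normalizes agent activity events from different clouds into one schema and flags simple anomalies: unmanaged agents, out-of-scope actions, and off-hours activity. The event fields, the `MANAGED_AGENTS` inventory, and the off-hours threshold are hypothetical placeholders for whatever your monitoring stack actually emits, not a real vendor API.

```python
from dataclasses import dataclass

@dataclass
class AgentEvent:
    agent_id: str   # e.g. an Entra ID, AWS role ARN, or GCP service account
    platform: str   # "azure", "aws", "gcp", ...
    action: str
    hour_utc: int

# Hypothetical inventory of deliberately managed agents, mapped to the
# actions each one is allowed to perform.
MANAGED_AGENTS = {
    "agent-invoice-bot": {"read_invoice", "post_summary"},
}

def review(event: AgentEvent) -> list[str]:
    """Return human-readable findings; an empty list means nothing anomalous."""
    findings = []
    allowed = MANAGED_AGENTS.get(event.agent_id)
    if allowed is None:
        findings.append(f"UNMANAGED agent '{event.agent_id}' active on {event.platform}")
    elif event.action not in allowed:
        findings.append(f"Agent '{event.agent_id}' took out-of-scope action '{event.action}'")
    if not 6 <= event.hour_utc <= 20:  # naive off-hours heuristic
        findings.append(f"Off-hours activity at {event.hour_utc:02d}:00 UTC")
    return findings

for e in [
    AgentEvent("agent-invoice-bot", "azure", "delete_mailbox", 3),
    AgentEvent("shadow-agent-7", "aws", "read_invoice", 14),
]:
    for finding in review(e):
        print(finding)
```

Because the review logic keys off a normalized event rather than any one platform’s log format, the same guardrails apply to an Azure agent and to a shadow agent someone spun up on AWS.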
Before Turning to a Reactive Solution, Lay the Groundwork for Success
At the end of the day, agentic AI vendors and the organizations that deploy their technology both want this technology to work safely and effectively, but that doesn’t mean organizations can or should rely on the big vendors to fix problems of the vendors’ own making. Instead of rushing to plug gaps with reactive solutions, organizations should first implement the fundamental controls that limit agentic AI risk.
