What Security Leaders Say About the First AI-Developed Zero-Day Exploit

Google Threat Intelligence Group (GTIG) identified a threat actor deploying a zero-day exploit believed to have been developed with AI. This marks the first time GTIG has discovered such a threat, and it suggests that newer AI models could be leveraged to create exploits rather than simply discover vulnerabilities.
Security Leaders Weigh In
Shane Barney, Chief Information Security Officer at Keeper Security:
Google’s discovery of the first AI-generated zero-day exploit marks a meaningful threshold. The significance of the finding isn’t that the underlying technique is new; it is that it confirms AI has moved from a theoretical attack accelerator to an operational one. The targeting of a 2FA bypass warrants particular attention from security leaders who may believe that deploying Multi-Factor Authentication (MFA) alone amounts to operational security success.
When attackers use AI to identify high-level semantic logic flaws in authentication flows at a speed and scale no human analyst can match, the gap between having MFA and having resilient authentication becomes impossible to ignore. Recent global research revealed that only 35% of organizations globally have implemented phishing-resistant MFA, the FIDO2 and passkey-based methods that resist this class of attack. That’s despite nearly half (46%) identifying AI-driven attacks as their single greatest source of increased security pressure over the past year.
That sizable gap is precisely where incidents happen. AI not only lowers the skill barrier for attackers; it also systematically targets the trust assumptions that legacy authentication methods were never designed to defend. The evolving threat landscape makes it essential that organizations move beyond SMS codes and basic authenticator apps towards hardware-backed, phishing-resistant credentials.
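To make that shift concrete, here is a minimal, illustrative browser-side sketch of registering a hardware-backed passkey with the standard WebAuthn API. The relying-party details, user handle, and locally generated challenge are placeholders for illustration; in a real deployment the server issues the challenge and verifies the resulting credential.

```typescript
// Minimal sketch: registering a phishing-resistant passkey via the
// browser WebAuthn API. Challenge and user fields are illustrative
// placeholders; in production the server generates the challenge.
async function registerPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // Placeholder challenge; a real one comes from the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { id: "example.com", name: "Example Corp" }, // hypothetical relying party
    user: {
      id: new TextEncoder().encode("user-1234"),     // hypothetical user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },   // ES256
      { type: "public-key", alg: -257 }, // RS256
    ],
    authenticatorSelection: {
      userVerification: "required", // enforce on-device verification
      residentKey: "required",      // discoverable credential (a passkey)
    },
  };
  // The browser binds the credential to the relying-party origin,
  // which is what makes this factor resistant to phishing.
  return navigator.credentials.create({ publicKey });
}
```

Because the credential is scoped to the origin by the browser itself, a lookalike phishing site cannot replay it, which is the property SMS codes and basic authenticator apps lack.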
Privileged access also needs to be treated as a discrete attack surface. Only 36% of organizations globally report full Privileged Access Management (PAM) deployment, leaving a significant share of enterprises exposed to exactly the kind of privilege escalation this exploit was designed to enable.
Google’s intervention prevented a potential mass-exploitation event this time. The architecture that prevents the next one already exists. The urgency now is elevating identity resilience to a strategic priority rather than treating it as an IT-specific compliance checkbox.
Diana Kelley, Chief Information Security Officer at Noma Security:
What’s significant here is that AI is accelerating the speed, scale, and accessibility of exploit development for attackers. Tasks that once required highly specialized expertise can now be performed faster, more cheaply, and by a much broader range of threat actors. When adversaries operationalize vulnerability discovery and exploit development at machine speed, it fundamentally changes the economics of cyber offense.
For defenders, this reinforces a reality many CISOs are already struggling with: organizations cannot remediate everything at the speed vulnerabilities and attack paths are being discovered and weaponized. The bottleneck is remediation capacity, prioritization, and operational execution. That means organizations need to become much more risk-driven, focusing on attack surface reduction, asset visibility, identity controls, segmentation, and compensating controls for exposures that cannot be remediated immediately.
The broader takeaway for organizations is that this is likely an early signal, not an isolated event. The industry should expect AI-assisted vulnerability research and exploit development to become increasingly common, which means resilience, visibility, and operational readiness matter more than ever.
Ronald Lewis, Head of Cybersecurity Governance at Black Duck:
From a commercialization standpoint, the race is clearly underway: adversaries are weaponizing AI to create and scale new classes of attacks, while defenders are racing to deploy AI-driven security capabilities to counter them. The dynamic is familiar. For those who lived through the early days of computer viruses and the subsequent rise of antivirus software, today’s environment feels strikingly similar: an escalating cycle of innovation on offense, followed by rapid defensive adaptation and monetization. The difference now is speed and scale. AI compresses the timeline on both sides, turning what was once a reactive update cycle into a continuous, automated arms race with significant financial incentives driving innovation across the ecosystem.
The significance of GTIG’s “first confirmed AI-developed zero-day” isn’t that it enabled mass exploitation (we’ve seen that pattern for decades) but that the exploit’s creation itself appears automated. This signals a shift from human-paced vulnerability discovery to machine-scaled weaponization, a transition security leaders have long anticipated but failed to operationally absorb.
Zero-days built for mass exploitation are nothing new; we’ve been here since Code Red, Slammer, WannaCry, and NotPetya. What makes GTIG’s finding historic is not the outcome but the origin: the exploit itself shows the hallmarks of AI-driven discovery and weaponization. This is the moment the industry feared, predicted, and debated, and still failed to meaningfully prepare for.
What makes this scary is a fundamental truth: the emergence of an AI-developed zero-day intended for mass exploitation demonstrates that current model guardrails are not stopping serious adversaries; they are merely slowing the unsophisticated ones.
Concerning the AI’s autonomy in discovering zero-days, crafting the exploit, and carrying out exploitation: the real risk isn’t machines gaining intent; it’s humans handing operational control to autonomous systems that can act faster, adapt more broadly, and fail harder than anyone can stop. Autonomous malware doesn’t need intent to be dangerous, only speed, scale, and the absence of a human brake, all of which are hinted at here.
Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace:
The latest research by GTIG highlights that bad actors have built out an infrastructure that enables them to gain persistent, free access to premium commercial AI models. That means they can spend time building sophisticated capabilities in the best AI models, with no limit on their usage. Compared with the more cautious approach taken by defenders, that gives attackers a clear advantage.
The research also highlights the arrival of a high-risk new form of malware that uses AI to understand its operating environment and adapt as it goes. Today, this type of AI-enabled malware is noisy and consequently easy to see. As attackers’ capabilities with AI continue to advance, those attacks will become easier to mask. Defenders need to move away from security approaches that expect attacks to contain set signatures, and towards ones that flag out-of-place behavior.
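As a toy illustration of that shift from signatures to behavior, the sketch below flags a host whose activity deviates sharply from its own baseline rather than matching any known signature. The feature (events per minute), window, and z-score threshold are illustrative assumptions, not any vendor’s method.

```typescript
// Toy behavioral baseline: flag a host whose current event rate deviates
// sharply from its own history, with no signature involved.
// The z-score threshold of 3 is an illustrative assumption.
function isAnomalous(history: number[], current: number, zThreshold = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance) || 1; // avoid divide-by-zero on flat baselines
  return Math.abs(current - mean) / stdDev > zThreshold;
}

// Example: a host that normally makes ~20 outbound DNS queries per minute
// suddenly makes 400; the deviation itself is the signal.
console.log(isAnomalous([18, 22, 19, 21, 20, 23], 400)); // true
```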
Ram Varadarajan, CEO at Acalvio:
AI-powered cyberattacks have moved from theory to reality. GTIG has confirmed the first known zero-day exploit developed with AI assistance, and early clues, like fake vulnerability scores and oddly over-explained code, revealed the fingerprints of a large language model. But those clues are temporary — attackers will quickly learn to hide them.
The larger concern is what today’s AI systems can actually do. Modern models no longer just scan code for technical mistakes. They can infer what developers intended the software to do and spot contradictions humans missed. That makes a new category of vulnerabilities far easier to find: hidden business-logic flaws, broken trust assumptions, and authorization errors that appear perfectly valid to conventional security tools but can still be exploited.
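A hypothetical example of the class of flaw he describes: a handler that verifies who the caller is but never checks whether the caller owns the resource. The code is syntactically clean, so pattern-based scanners pass it, yet the broken trust assumption is exploitable. All names below are invented for illustration.

```typescript
// Hypothetical endpoint illustrating a business-logic authorization flaw.
interface Invoice { id: string; ownerId: string; total: number }

const invoices = new Map<string, Invoice>([
  ["inv-1", { id: "inv-1", ownerId: "alice", total: 120 }],
  ["inv-2", { id: "inv-2", ownerId: "bob", total: 9800 }],
]);

// FLAWED: authenticates the caller but never checks ownership, so any
// logged-in user can read any invoice by guessing IDs. No scanner rule fires.
function getInvoiceFlawed(callerId: string, invoiceId: string): Invoice | undefined {
  if (!callerId) throw new Error("unauthenticated"); // authentication only
  return invoices.get(invoiceId);                    // missing ownership check
}

// FIXED: the authorization check encodes the intended business rule.
function getInvoice(callerId: string, invoiceId: string): Invoice | undefined {
  const invoice = invoices.get(invoiceId);
  if (!invoice || invoice.ownerId !== callerId) return undefined;
  return invoice;
}

console.log(getInvoiceFlawed("alice", "inv-2")); // leaks bob's invoice
console.log(getInvoice("alice", "inv-2"));       // undefined
```

The flaw lives in the gap between what the code does and what the developer intended, which is precisely the kind of contradiction a model that reasons about intent can surface.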
We're facing an “assume compromise” future in cybersecurity. Our best defense will be to engage these attacks bot-on-bot inside the perimeter, with active defense keyed by AI itself.
John Gallagher, Vice President of Viakoo Labs at Viakoo:
The Google Cloud report illustrates that AI is fundamentally altering the offensive capabilities of threat actors, especially with respect to speed of attack. The future of cybersecurity, particularly for the large and vulnerable fleets of OT and IoT devices, depends on fighting AI-driven threats with AI-powered, autonomous remediation.
Most concerning is the on-the-fly use of media and content creation to achieve the AI model’s objective. This takes AI-driven threats well beyond the typical cyberattack in which data is stolen or devices are taken offline: such campaigns can now extend into ongoing manipulation of large populations. The potential for this is enormous.
Simply knowing a vulnerability exists is no longer enough. The speed of AI-driven exploits demands that organizations close the “Action Gap” between discovery and remediation.
There are concrete steps cyber defenders can take to improve their defenses against AI-driven threats. Security teams must deploy platforms capable of safely automating the remediation process, such as pushing verified firmware updates to thousands of OT endpoints simultaneously. Performing this as autonomously as possible, with humans remaining in the loop for decision-making, is crucial to match the speed at which AI-driven threats can unfold.
While attacks may be fully autonomous, defense should rely on AI-enabled precision and speed for human decision-makers. AI should serve up the remediation options, with human operators making the critical approval decisions.
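As a minimal sketch of that division of labor, the snippet below shows an AI-proposed remediation queue gated by an explicit human approval step. It is an illustrative pattern under assumed types and policies, not any vendor’s product or API.

```typescript
// Illustrative human-in-the-loop remediation queue: AI proposes fixes at
// machine speed, but nothing executes without an approval decision.
type Action = { device: string; fix: string; risk: "low" | "medium" | "high" };

// Hypothetical AI-generated proposals.
const proposed: Action[] = [
  { device: "plc-014", fix: "apply verified firmware 2.4.1", risk: "low" },
  { device: "cam-207", fix: "rotate default credentials", risk: "medium" },
];

async function remediate(
  actions: Action[],
  approve: (a: Action) => Promise<boolean>, // the human brake
): Promise<void> {
  for (const action of actions) {
    if (await approve(action)) {
      console.log(`executing on ${action.device}: ${action.fix}`);
      // ...push the verified update to the endpoint here...
    } else {
      console.log(`held for review: ${action.device}`);
    }
  }
}

// Example policy: auto-approve only low-risk actions; escalate the rest
// to a human operator for the critical approval decision.
void remediate(proposed, async (a) => a.risk === "low");
```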