65% of the Forbes AI 50 List Leaked Sensitive Information

Are you concerned about the security of top AI companies? It turns out, those worries may not be unfounded.
Research from Wiz has revealed that nearly two-thirds (65%) of private AI companies listed in the Forbes AI 50 had leaked sensitive information on GitHub.
“Think API keys, tokens, and sensitive credentials, often buried deep in deleted forks, gists, and developer repos most scanners never touch,” the research states. “Some of these leaks could have exposed organizational structures, training data, or even private models.”
With the development and evolution of AI accelerating, cybersecurity teams are finding themselves in a new risk frontier.
AI Amplifies Vulnerabilities
Randolph Barr, Chief Information Security Officer at Cequence Security, states, “Wiz’s finding that 65% of leading AI firms have leaked sensitive information isn’t a new kind of vulnerability, it’s the predictable consequence of hyper-speed AI development colliding with long-standing security debt. The majority of these exposures stem from traditional weaknesses such as misconfigurations, unpatched dependencies, and exposed API keys in developer repositories. What’s changed is the scale and impact. In AI environments, a single leaked key doesn’t just expose infrastructure; it can unlock private training data, model weights, or inference endpoints, the intellectual property that defines a company’s competitive advantage. As AI workloads scale across cloud environments, these once-contained issues now have global reach and real economic impact.
“AI hasn’t reinvented the concept of a vulnerability, it has amplified it. About two-thirds of current AI-related incidents still originate from traditional weaknesses, but the remaining third are uniquely ‘AI-native.’ These include model and data poisoning, prompt injection, and autonomous agents that can chain together API calls and act with minimal human oversight. These emerging risks reflect the reality that AI systems are dynamic, self-learning, and interconnected in ways traditional applications never were. When paired with the rapid speed of development, the outcome is a growing attack surface that grows faster than most security programs can respond.”
Reducing AI Risks
Shane Barney, Chief Information Security Officer at Keeper Security, shares, “As organizations adopt AI and cloud-native development, the number of non-human accounts and automated processes continues to rise. These machine identities are critical to modern operations, yet they often exist outside traditional identity and access management frameworks. When visibility into those credentials is limited, risk spreads quietly across systems that are otherwise well protected.
“Reducing that risk requires sustained visibility and control, as well as a centralized enterprise-level approach to managing secrets. Continuous monitoring for exposed secrets, automated credential rotation and least-privilege access policies help contain exposure without slowing innovation. Treating machine-based credentials with the same rigor applied to human users strengthens both resilience and operational trust.
“Implementing Privileged Access Management (PAM) in conjunction with secrets management extends that visibility and control even further. PAM enforces strict access boundaries and accountability for elevated permissions, while secrets management ensures that credentials used by systems and applications are securely stored, rotated and monitored. Together, these controls create a unified framework for managing both human and non-human identities, reducing credential sprawl and limiting the potential impact of an exposure.”
Security and AI Innovation: Looking to the Future
AI is here to stay, and it is always moving forward. However, in order for AI to be a help rather than a hinderance, organizations that utilize AI models must implement the security essentials in order to minimize risk.
Barr states, “Ultimately, if hyper-development is inevitable, so too must be hyper-defense. That means automating the fundamentals, secret hygiene, access control, anomaly detection, and policy enforcement, so human teams can focus on governance and strategic oversight. The organizations that succeed won’t be those that slow AI innovation, but those that secure it at the same speed it evolves.”
Looking for a reprint of this article?
From high-res PDFs to custom plaques, order your copy today!






