Global News & Analysis
Generative AI Remains Growing Concern for Organizations

Cobalt released its State of LLM Security Report 2025, which reveals a widening readiness gap in enterprise security as the rapid adoption of generative AI (genAI) outpaces defenders’ ability to secure it. Thirty-six percent of security leaders and practitioners admit that genAI is moving faster than their teams can manage as organizations continue to embed AI deep into core business operations.
The report found that 48% of respondents believe a “strategic pause” is needed to recalibrate defenses against genAI-driven threats. In addition, 72% of respondents cite genAI-related attacks as their top IT risk, but 33% are still not conducting regular security assessments, including penetration testing, for their LLM deployments.
Half of respondents want more transparency from software suppliers about how they detect and prevent vulnerabilities, signaling a growing trust gap in the AI supply chain.
Security leaders (C-suite and VP level) are more concerned about long-term genAI threats such as adversarial attacks: 76% expressed concern, versus 68% of practitioners. The pattern reverses for near-term operational risks such as inaccurate outputs, where 45% of practitioners expressed concern versus 36% of security leaders.
Top concerns among all survey respondents include sensitive information disclosure (46%), model poisoning or theft (42%), and training data leakage (37%), all pointing to an urgent need to protect the integrity of data pipelines.
Overall, 69% of serious findings across all pentest categories are resolved, but that figure falls to just 21% for high-severity vulnerabilities found in LLM pentests. This is concerning given that 32% of LLM pentest findings are serious; it is also the lowest resolution rate across all test types conducted by Cobalt.