Global News & Analysis
A Lack of AI Governance Leads to Additional Security Risks

A recent report by Acuvity AI found that artificial intelligence (AI) security is "poorly" governed, leaving organizations open to risk. According to the report, 50% of respondents expect data leakage through generative AI tools in the next year.
Forty-nine percent of survey respondents anticipate “shadow AI” incidents, and 41% are concerned about AI-driven insider threats. Additionally, 70% of respondents say they lack optimized AI governance.
This report also finds that AI security breaks from typical ownership models. CIOs now lead AI security in enterprises (29%), followed by Chief Data Officers (17%) and infrastructure teams (15%), while CISOs rank fourth at 14.5%. This marks a departure from other security domains, where the CISO usually holds primary responsibility.
On the budget front, AI supply chain security is the leading investment priority, with 31% of organizations selecting it as their primary focus in the next 12 months. This reflects recognition that risk spans the entire AI ecosystem, not just one component.
Top reported concerns include the use of standalone generative AI tools without IT approval (21%) and AI features embedded in SaaS applications (18%). Respondents most often cited risks in datasets, APIs, and embedded AI features, highlighting concern over exposures that occur at runtime.