58% of Organizations Spend Over 10 Hours a Month Securing AI-generated Code

A recent report by Cloudsmith found that 31% of organizations using AI-generated code spend 10 hours or less per month validating, auditing, or securing it, including 5% who do not explicitly audit AI code at all. The risks posed by weak software supply chain security have become increasingly clear in the past 12 months: with threat campaigns such as Shai Hulud 2.0 and SANDWORM_MODE specifically targeting the software supply chain via upstream repositories, 44% of respondents have experienced a security incident caused by a third-party dependency.
In the same time period, 44% of respondents reported their organization spent over 50 hours per month investigating potential security issues linked to third-party dependencies, whether or not they resulted in a breach.
Confidence in AI-generated code is also lacking. 58% of respondents spend at least 11 hours per month validating and securing AI-generated code (rising to over 40 hours for 8% of respondents) as teams work to catch hidden dependencies and potential vulnerabilities. Only 17% are very confident that AI is not introducing new vulnerabilities into their codebase.
These concerns are well-founded: AI is known to introduce risk into software development by generating insecure or incorrect code. One example is “slopsquatting”, where models hallucinate non-existent package names that attackers can then register and exploit, embedding hidden vulnerabilities that compromise downstream systems.
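One common mitigation for slopsquatting is to check AI-suggested dependencies against an internally approved allowlist before installing anything, rather than trusting the model's output. The sketch below is purely illustrative; the package names and the allowlist are hypothetical, and a real deployment would check against a curated internal registry.

```python
# Hypothetical guard against "slopsquatting": flag any declared
# dependency whose name is not on an approved allowlist, so a human
# can review it before it is ever installed. All names are examples.

APPROVED_PACKAGES = {"requests", "flask", "numpy"}  # illustrative allowlist

def flag_unapproved(requirements):
    """Return requirement names not on the allowlist (possible hallucinations)."""
    flagged = []
    for line in requirements:
        # Strip version specifiers like "requests>=2.31" down to the bare name.
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip().lower()
        if name and name not in APPROVED_PACKAGES:
            flagged.append(name)
    return flagged

# A misspelled, possibly hallucinated package is caught for manual review.
print(flag_unapproved(["requests>=2.31", "flask", "reqeusts-helper"]))
# → ['reqeusts-helper']
```

The same idea generalizes to any ecosystem: the point is that existence on a public registry is not proof of legitimacy, so the gate is an allowlist, not a lookup.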
In addition to growing exploitation of third-party dependencies and concerns about the adoption of AI, a wider range of issues is putting pressure on the software supply chain. With the arrival of new legislation such as the EU’s Cyber Resilience Act, companies face incredibly tight deadlines when responding to cyberattacks, including an obligation to provide a detailed assessment within 48 hours of becoming aware of a breach. To meet it, organizations will need to produce provenance data with little to no notice.
Despite this, research shows that if they were hit with a surprise audit tomorrow, 53% of respondents could only produce a comprehensive report of artifact versions, origins, and security attestations with significant manual effort or time. This is a particularly significant gap given the number of organizations committing AI-generated code to production without understanding exactly how it functions or why it was created.
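The audit-readiness gap described above comes down to whether version, origin, and attestation data already exist for every artifact, or have to be reconstructed by hand. A minimal sketch of that kind of self-check, assuming a hypothetical artifact inventory with those three fields, might look like this:

```python
# Illustrative audit-readiness check: given a (hypothetical) artifact
# inventory, report which entries are missing the version, origin, or
# attestation data an auditor would ask for. Field names are assumptions.

REQUIRED_FIELDS = ("version", "origin", "attestation")

def audit_gaps(artifacts):
    """Map each artifact name to its missing provenance fields."""
    gaps = {}
    for art in artifacts:
        missing = [f for f in REQUIRED_FIELDS if not art.get(f)]
        if missing:
            gaps[art["name"]] = missing
    return gaps

inventory = [
    {"name": "payments-service", "version": "2.4.1",
     "origin": "internal-registry", "attestation": "sha256:abc123"},
    {"name": "left-pad-fork", "version": "1.0.0",
     "origin": None, "attestation": None},  # provenance gap
]

print(audit_gaps(inventory))
# → {'left-pad-fork': ['origin', 'attestation']}
```

An empty result is the goal state: a report of versions, origins, and attestations can then be generated on demand rather than assembled manually under a deadline.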
In addition to these findings, the report also reveals respondents’ plans for the future. The top three challenges respondents expect to face this year are:
- Ensuring builds and releases remain available during spikes and third-party outages (21%).
- Meeting new regulatory standards (NIS2, FedRAMP) and securing the supply chain (20%).
- Reducing cloud spend and consolidating toolchains (19%).
Meanwhile, the top three areas in which respondents plan to increase investment are:
- Security Scanning (SCA/SAST) (29%).
- AI/ML Ops Infrastructure (29%).
- Internal Developer Portal (IDP) (13%).