AI Introduces Security Vulnerabilities Within Code in 45% of Cases
Photo: Markus Spiske via Unsplash
A recent report by Veracode found critical security flaws in AI-generated code. While the models studied produced functional code, they introduced security vulnerabilities in 45% of cases.

The research demonstrates a troubling pattern: when given a choice between a secure and an insecure way to complete a coding task, GenAI models chose the insecure option 45% of the time. The research also uncovered a critical trend: despite advances in LLMs’ ability to generate syntactically correct code, their security performance has not kept pace, remaining unchanged over time.
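The report does not publish its test prompts, but SQL injection (CWE-89) is a canonical instance of the secure-versus-insecure choice it describes. The sketch below is an illustrative example, not taken from the study: the same lookup written two ways, one vulnerable to injection and one using a parameterized query.

```python
import sqlite3

# Throwaway in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_insecure(name):
    # Insecure choice: string interpolation lets attacker-controlled
    # input rewrite the query (SQL injection, CWE-89).
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name):
    # Secure choice: a parameterized query treats the input as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_insecure(payload))  # returns every row -- injection succeeds
print(find_user_secure(payload))    # returns [] -- input treated as a literal
```

Both functions are syntactically correct and pass a naive functional test, which is exactly the gap the study highlights: functional correctness tells you nothing about which of these two forms a model chose.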
AI is also enabling attackers to identify and exploit security vulnerabilities more quickly and effectively. AI-powered tools can scan systems at scale, identify weaknesses, and even generate exploit code with minimal human input. This lowers the barrier to entry for less-skilled attackers and increases the speed and sophistication of attacks, posing a significant threat to traditional security defenses. Not only are vulnerabilities increasing; the ability to exploit them is also getting easier.