There’s a well-known theory that cockroaches can survive basically anything, even a nuclear explosion. While that theory only rings true up to a point, their simple body composition makes them extremely hardy for their size, and difficult to eradicate under most conditions.
I’ve been thinking… if cockroaches had an equivalent in the digital world, it would have to be SQL injection (SQLi) vulnerabilities in code. SQLi has been a known vulnerability class for more than twenty years, yet organizations fall victim to it time and time again. The widespread, costly attack on Target was reported to involve SQL injection, as was an instance of election hacking in Illinois in which 200,000 voter records were exposed, prompting the FBI to recommend that all IT admins work quickly to strengthen their security practices.
Imperva’s Hacker Intelligence Initiative Report revealed that between 2005 and 2011, SQLi attacks were used in 83% of all reported data breaches. Today, injection vulnerabilities remain the number-one threat in the OWASP Top 10. They are relatively simple, yet they just won’t die.
It seems ridiculous that this same vulnerability is still appearing in a significant number of application security scans. We know how it operates, and we know how to stop it. How is this possible? The truth is, our software security has a vast amount of room for improvement.
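And we really do know how it operates and how to stop it. As a minimal sketch (the table, column names and inputs here are hypothetical, using Python's standard-library sqlite3 module): concatenating user input into a SQL string lets an attacker rewrite the query, while a parameterized query treats the very same input as inert data.

```python
import sqlite3

# A throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

# VULNERABLE: the input is spliced into the SQL text, so a crafted value
# turns the WHERE clause into a tautology and dumps every row.
user_input = "' OR '1'='1"
query = "SELECT name FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(query).fetchall()
print(leaked)  # every row in the table, not just one matching user

# SAFE: a parameterized query binds the input strictly as data,
# so the same attack string matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] - no user is literally named "' OR '1'='1"
```

The fix is a one-line change, which is precisely what makes the vulnerability's persistence so galling.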
Veracode’s State of Software Security Report, based on 400,000 application scans in 2017, revealed an alarming statistic: just 30% of applications passed OWASP Top 10 policy. This has been a consistent theme over the past five years, with SQL injection appearing in almost one in three newly scanned applications. This is evidence of an endemic issue: we are not learning from our mistakes, and CISOs face an uphill battle procuring enough security talent. Typically, the ratio of AppSec specialists to developers is an inadequate 1:100.
Why is software security on life support?
It’s no secret that specialist security talent is scarce, but we must also pay attention to the fact that developers are not fixing issues as they arise, and are quite clearly ill-equipped to avoid introducing vulnerabilities in the first place. The same Veracode report found that there were documented mitigations for just 14.4% of all development vulnerabilities; in other words, most vulnerabilities were submitted without a development mitigation. Fewer than one-third of vulnerabilities were closed in the first 90 days, and 42% were never closed within the development period.
I speak to security professionals, CISOs and CEOs all the time, and anecdotally, I’ve learned that many companies become so frustrated with the number of vulnerabilities found that can’t be mitigated (in addition to the scourge known as false positives) that they stop scanning altogether, cross their fingers and hope for the best.
Why do AppSec professionals let this happen?
Make no mistake: AppSec people are painfully aware of problems in code. After all, that’s one of their core skills that makes them such a valuable team resource. However, they are often hamstrung by several factors.
For instance, an AppSec manager will find an issue and ask the developer, “Can you fix the code?” The answer to this important question differs from organization to organization, but in general, the developer is so stretched meeting strict feature delivery sprints that they simply don’t have the time to fix these problems, nor decent tools to help them. AppSec professionals themselves might be able to identify vulnerabilities, but they often lack the skills and/or access to remediate them on the spot.
We must also realize that for every problem, there is a process of finding a solution, implementing it, and then testing it. For even the slightest issue found in the code, the time it can take to fix, not to mention the resources required, is immense. There are over 700 distinct vulnerability types that can be introduced into software, and it’s simply impossible for any one person to defend against them all. It is for this reason that most companies stick to following the OWASP Top 10 only. All the while, developers keep building features and, in turn, keep introducing vulnerabilities into the code they write.
What is the solution?
The simple fact is, we don’t give our developers the tools and training to foster secure coding success. There are no regulations that force organizations to ensure developers possess adequate security skills, and it’s a sad reality that most universities and internships don’t prepare junior developers to code securely, either.
When someone wants to fly a plane, there is a very rigorous process that ensures training, practical experience, medical checks, safety knowledge and exams before they are able to fly. No one would dare imagine they would be let loose in the sky without this extensive preparation and validation of skills, yet this is what happens day to day with code writing.
We need to spend the time to educate developers on writing secure code. However, in today’s fast-paced world of software development, where good developers and security professionals are in short supply, it never seems to be a priority. It’s time we changed the conversation.
A recent headline from the World Economic Forum screamed: “There can be no digital economy without security”, with the accompanying content arguing the need for security to be a core part of any digital transformation strategy. “Security is what protects businesses, allowing them to innovate, build new products and services. Beyond a defensive role, security provides businesses with a strategic growth advantage.”
Improving secure coding skills and outcomes will add a powerful layer of cyber protection for organizations, helping them create better, faster code. Developers don’t need to become security experts, but they must be empowered positively and practically to be the first line of defense against cyberattacks. Developers can be the next security and innovation heroes. They are clever, creative problem-solvers, and generally keen to build their skills. Play to their strengths with the specialized training they deserve, and commit to a higher software security standard.