A little more than 20 years ago I announced to my colleagues in the FBI – after 10 years of service – that I was leaving for the private sector. “What!” they exclaimed. “Are you crazy? You’re halfway to your pension…not to mention that the life expectancy of a CISO is only about 18 months. Notwithstanding your best efforts, you stand a good chance of ending up unemployed and on the street with nothing!” I acknowledged that they had some legitimate points. Even in those days we had a growing sense that the anemic performance of signature-based antivirus solutions meant that compromise was almost inevitable. I left nonetheless, and with that reality in front of me, prudently managed the expectations of my leadership teams with what has since become a widely held and often-recited mantra in the industry: “Not if, but when.” It’s not if we’re going to be compromised, but a matter of when.

As galling as that was to admit professionally, it was borne out in every company I subsequently served. In fact, a Verizon report noted that 90 percent of all compromises are due to malware. That we were failing so egregiously that high up in what became known as the “kill chain” meant we had to adapt – to build complex downstream structures to compensate for that failure up above, in what we called a Defense-in-Depth (DiD) strategy. That sense of inevitability is also manifest in models such as the NIST Cybersecurity Framework, where at least the Detect, Respond and Recover functions carry an embedded presupposition that something bad has already happened. These structures are complex, resource-intensive and incredibly expensive – so much so that DiD has come to be acknowledged as Expense-in-Depth: the sustaining lifeblood of the Security Industrial Complex.

The modern computing landscape, with its complex array of physical, mobile, cloud and virtual computing, has exponentially grown the attack surface within which our adversaries can strike. In lockstep with that evolution, the cybersecurity industry has prolifically grown its catalog of DiD security technologies – the revenue from which, ironically, works to undermine the enthusiasm with which vendors might otherwise embrace the emergence of prevention options that can grapple with evolving threats more effectively and efficiently, using fewer resources than historically required.

Most EDR solutions require the preservation of an Expense-in-Depth organizational structure: a team of analysts and forensics specialists; a significant investment in on-premises infrastructure and/or continuous streaming of data to the cloud; and the employment of other highly skilled, ever-scarcer security resources. What we have needed is a way to liberate the financial and human resources held prisoner by this archaic approach – a solution couched in business terms and designed to automate threat detection and response tasks with existing or even fewer resources.

Many CISOs struggle to communicate the business value of such an approach to senior executives and board members, often because they lack backgrounds in finance and economics. Security practices are no longer a distasteful cost of doing business; they are an indispensable and inextricable part of advancing it, recognized as integral to corporate governance and accountability. Yet the risk-adjusted costs of security investments remain poorly understood. Consequently, an organization’s Total Cost of Controls (TCC) has been allowed to grow rapidly without producing comparable improvements in risk management efficiency.
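The gap between what controls cost and the risk reduction they actually buy can be made concrete with the standard Annualized Loss Expectancy (ALE) and Return on Security Investment (ROSI) formulas from the risk-economics literature. The sketch below is purely illustrative – the dollar figures, incident rates and control scenarios are hypothetical assumptions, not data from this article:

```python
# Illustrative sketch: weighing Total Cost of Controls (TCC) against the risk
# reduction it buys, using the common ALE and ROSI formulas.
# All figures are hypothetical.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized Loss Expectancy = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Return on Security Investment: (risk reduction - cost) / cost."""
    return (ale_before - ale_after - control_cost) / control_cost

# Hypothetical breach scenario: $500k per incident, expected twice a year.
ale_before = ale(500_000, 2.0)               # $1,000,000/yr of expected loss

# Control set A: $200k/yr, cuts expected incidents to 0.5/yr.
ale_after = ale(500_000, 0.5)                # residual $250,000/yr
rosi_a = rosi(ale_before, ale_after, 200_000)  # 2.75 -> strong return

# Control set B: $800k/yr of layered tooling, same residual risk.
rosi_b = rosi(ale_before, ale_after, 800_000)  # -0.0625 -> value destroyed

print(f"ROSI A: {rosi_a:.2f}, ROSI B: {rosi_b:.4f}")
```

The point of the comparison: both control sets reach the same residual risk, but the second spends four times as much to get there – TCC growing with no comparable gain in risk management efficiency.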

How can a CISO meet expectations for reducing risk while minimizing cost? How can these costs be accurately measured and assessed within the context of an organization’s overall risk management strategy? In next month’s issue, we’ll explore a TCC model that makes it possible to measure information security as a business function, balancing risks against costs to maximize value and efficiency.