Metrics that Matter: How to Measure the Effectiveness of Corporate Security Programs
What is the point of spending time, resources and money on your security program if you can’t tell whether it’s working or not? It’s just as important to establish the right metrics for a security program as it is to have such a program in the first place. We often say “not everything that gets measured matters, but what matters absolutely should get measured,” and that is just as true for security as any other critical business function. So how should organizations go about measuring the effectiveness of their security program?
We recommend a three-stage approach (akin to “crawl / walk / run”) based on your organization’s security program maturity.
Three Stages of Measurement
Stage One: Foundational – Measuring Capabilities and Maturity
For enterprises that are just starting to get a handle on their security program, it’s likely that they simply don’t know what they don’t know. At this stage, the most appropriate place to start is a framework-based assessment that validates controls and assesses and reports on the existence of foundational capabilities and maturity of their program.
This assessment should be tailored to the company’s unique security program, focused on specific business needs and executed with the explicit goal of increasing security’s impact on business success. Toward that end, program and capability assessments include:
- A baseline of the organization's threat profile and an understanding of the scenarios that would have the greatest impact on the business;
- Validation of basic capabilities that all security programs should have and an analysis of how the program addresses key risks;
- Measuring how well a program and individual capabilities perform today;
- Understanding where industry peers and best-in-class organizations stand, to inform decisions about current gaps and future spend; and
- Crafting an action plan for addressing key gaps and optimizing existing capabilities through a prioritized transformation roadmap.
The results of this effort will enable organizations to understand all the elements of the program and their maturity, with a roadmap for improvement.
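One way to make such an assessment concrete is to score each capability's current maturity against a target level and weight the gap by business impact. The sketch below illustrates this idea; the capability names, maturity scores (on a 1-5 scale, as many framework-based assessments use) and weights are illustrative assumptions, not prescribed values.

```python
# Hypothetical sketch: score capability maturity against target levels
# and rank the gaps to seed a prioritized transformation roadmap.
# All names, scores and weights below are illustrative assumptions.

capabilities = {
    # name: (current maturity, target maturity, business weight)
    "endpoint visibility": (2, 4, 3),
    "security awareness":  (3, 4, 1),
    "incident response":   (1, 4, 3),
}

# Weighted gap = (target - current) * business weight; larger = address first.
gaps = sorted(
    ((name, (target - current) * weight)
     for name, (current, target, weight) in capabilities.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, gap in gaps:
    print(f"{name}: weighted gap {gap}")
```

Ranking by weighted gap rather than raw maturity keeps the roadmap focused on the capabilities that matter most to the business, not just the lowest scores.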
Stage Two: Intermediate – Measuring ROI
Companies that have already begun investing in an enhanced security program through a focused transformation will often want to understand if the program strategy is still correct and if they are obtaining the expected ROI.
At this stage, we want to define key performance metrics to measure the maturity of capabilities and their ability to deliver ROI. Metrics should be related to what is meant to be improved. For example:
- If you can’t detect threats because of a lack of endpoint visibility, then metrics must relate to the initiative to improve endpoint visibility and should show a sustained increase in it.
- If there’s low security awareness, metrics must measure the training program, showing a material increase in test scores.
- If the organization’s threat assessment identified certain attack vectors as more likely, metrics should show an increased level of detection capability along those attack vectors’ paths.
If the metrics are stagnant, the company should determine what’s not working. Assuming the new capabilities were implemented correctly, common causes of stagnation include: staffing that has not kept pace with the additional technology or controls added; an increase in the attack surface due to an expanded digital footprint, acquisitions or business changes; or processes that were never updated to reflect the new technical controls, so the organization is still executing the old way, or executing inconsistently.
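Spotting stagnation is simpler when the check is automated. The minimal sketch below flags a metric that has failed to improve over its measurement window; the function name, sample readings and improvement threshold are assumptions for illustration.

```python
# Hypothetical sketch: flag stagnant security metrics from periodic readings.
# The threshold and sample data are illustrative assumptions.

def is_stagnant(readings, min_improvement=1.0):
    """Return True if the metric failed to improve by at least
    `min_improvement` (in the metric's own units, e.g. percentage
    points of endpoint visibility) across the measurement window."""
    if len(readings) < 2:
        return False  # not enough data to judge
    return (readings[-1] - readings[0]) < min_improvement

# Quarterly endpoint-visibility percentages after a tooling rollout:
improving = [62.0, 71.5, 80.0, 88.5]
stalled = [62.0, 62.5, 61.8, 62.2]

print(is_stagnant(improving))  # False: visibility is climbing
print(is_stagnant(stalled))    # True: time to investigate staffing and process
```

A flagged metric is the trigger to ask the questions above: is staffing adequate, has the attack surface grown, and have processes caught up with the controls?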
Stage Three: Advanced – Measuring Readiness to Respond
Enterprises with more mature security programs have gone beyond measuring capabilities and ROI to asking the ultimate question: do they have the right mix of people, process and technology in place, as measured by their readiness to respond? The best way to answer it is to conduct adversary/scenario-based testing. This testing is beneficial at all stages, but more mature organizations will see the most value from the investment because the results highlight hard-to-find weaknesses in the security program. From a cybersecurity perspective, this is typically accomplished by:
- Red Teaming: A technical assessment in which a scenario or target is agreed upon and the company defends its environment from a simulated adversary in a controlled setting with established rules of engagement. The purpose of a Red Team is to test the detection and response capabilities of a company.
- Compromise Assessment or Hunt: A technical assessment in which the assessor is searching the company’s network for indications that there is an active attacker within the environment or that a potential attack vector is possible. The purpose of this assessment is to augment the detection capabilities of a company.
- Wargaming: A simulation that tests a company’s organizational readiness to respond to a cyberattack in a coordinated way across the enterprise. The purpose of this type of simulation is to test the response, communication and escalation processes during an active incident.
The most valuable metrics here quantify such things as mean time to detect, dwell time and the organization’s ability to emulate the threat landscape to stress-test its security operations teams.
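Metrics such as these fall out directly from incident timelines. The sketch below computes mean time to detect (initial compromise to detection) and mean time to contain (detection to containment); the incident data and tuple layout are illustrative assumptions.

```python
from datetime import datetime

# Hypothetical sketch: derive response-readiness metrics from incident
# timelines. All timestamps below are illustrative assumptions.

incidents = [
    # (first attacker activity, detection, containment)
    (datetime(2024, 3, 1, 8, 0), datetime(2024, 3, 1, 20, 0), datetime(2024, 3, 2, 6, 0)),
    (datetime(2024, 4, 10, 2, 0), datetime(2024, 4, 12, 2, 0), datetime(2024, 4, 12, 14, 0)),
]

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

mttd = mean_hours([detect - start for start, detect, _ in incidents])
mttc = mean_hours([contain - detect for _, detect, contain in incidents])

print(f"Mean time to detect:  {mttd:.1f} h")   # 30.0 h
print(f"Mean time to contain: {mttc:.1f} h")   # 11.0 h
```

Tracked over successive red team exercises or real incidents, a downward trend in these numbers is direct evidence that readiness to respond is improving.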
The metrics that matter should relate to the business goals of your organization, the threats that are most relevant and the maturity stage of the program.
Defining metrics and assessing results requires incorporating an outcomes-focused approach across the entire enterprise. Further, building comprehensive program strategies, detection mechanisms and response capabilities can benefit from cross-industry perspectives. Core to this effort is an organizational commitment to regularly communicate and objectively evaluate cyber preparedness.