Security departments have traditionally been treated as a cost center by the business, pressured to justify their budgets and constantly rationalize spending. This has put existing solutions and new purchases under the microscope, with mounting pressure from leadership to explain their necessity and demonstrate their value to the organization. In the past, measuring the impact of a given security solution was often difficult — but today’s businesses have access to new resources that can help gauge their performance in more meaningful ways. New methods of looking at exposure management and security control validation now enable businesses to test, discover, understand and prioritize security investments with business and risk context.

Amid the growing interconnectivity of the modern world, introducing new initiatives often demands cybersecurity scoping and risk analysis. This involves understanding the potential impact on the business’s risk and resilience, which may set new baselines and requirements for proving that the company will remain secure — or at least within acceptable risk tolerance levels. To do this, security teams must demonstrate the effectiveness of their current tools, justify why a new solution is needed, and lay out what will happen without this new investment. While protection metrics like “risk scores” are now abundant, they often conflict from tool to tool. Without the ability to aggregate and contextualize information or translate security risks into business context, technical teams are unable to clearly articulate value or how they will measure success.

Why is security ROI difficult to quantify? 

The severity of a breach is often assessed in dollars and cents based on loss of revenue, cost of recovery efforts, and fines and other regulatory expenses — so it can be difficult to quantify a breach that didn’t happen. Without the event itself, the amount of downtime (loss of revenue), the severity of the attack (cost of recovery) and the exposure to regulatory oversight action (fines and other costs) cannot be cleanly defined. This puts security teams in a tough spot: the cost of the prospective security solution is known, but the amount of resources that solution will ultimately save the company from having to spend is unknown.
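
One common industry heuristic for putting numbers on a breach that hasn’t happened is Annualized Loss Expectancy (ALE): the estimated cost of a single incident multiplied by how often it is expected to occur per year. The sketch below is purely illustrative — the dollar figures and occurrence rates are invented assumptions, not data from any real deployment:

```python
# Hypothetical illustration of the Annualized Loss Expectancy (ALE) model.
# All figures below are invented for the example.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE * ARO: expected yearly loss from a given threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

def security_roi(ale_before: float, ale_after: float,
                 solution_cost: float) -> float:
    """ROI = (risk reduction - cost of the solution) / cost."""
    risk_reduction = ale_before - ale_after
    return (risk_reduction - solution_cost) / solution_cost

# A breach estimated at $500k, expected once every two years (ARO = 0.5)...
ale_before = annualized_loss_expectancy(500_000, 0.5)
# ...versus a control that cuts likelihood to once per decade (ARO = 0.1).
ale_after = annualized_loss_expectancy(500_000, 0.1)
roi = security_roi(ale_before, ale_after, solution_cost=100_000)
print(f"ALE before: ${ale_before:,.0f}, after: ${ale_after:,.0f}, ROI: {roi:.0%}")
```

The hard part, as the paragraph above notes, is not the arithmetic but the inputs: without a real incident, both the single-loss figure and the occurrence rate are estimates.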

The most common metrics for measuring security effectiveness are mean time to detection (MTTD) and mean time to remediation (MTTR), but these are based on the response to a threat action and gauge the performance of the security system as a whole, rather than any individual tool, program, or process. In practice, only solutions like breach and attack simulation (BAS) or red teaming and penetration testing can isolate the efficacy of an individual platform or tool, which then can be converted to demonstrable value.
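
To make concrete why MTTD and MTTR measure the system as a whole rather than any one tool, here is a minimal sketch of how the two metrics are computed from incident timestamps. The incident records are invented, and nothing in the calculation attributes detection or remediation to an individual control:

```python
# Minimal sketch: computing MTTD and MTTR from incident timelines.
# Incident data is hypothetical.
from datetime import datetime, timedelta
from statistics import mean

# Each record: (occurred, detected, remediated)
incidents = [
    (datetime(2024, 1, 1, 9, 0),  datetime(2024, 1, 1, 11, 0),  datetime(2024, 1, 2, 9, 0)),
    (datetime(2024, 2, 5, 14, 0), datetime(2024, 2, 5, 14, 30), datetime(2024, 2, 5, 20, 30)),
]

def mttd(records) -> timedelta:
    """Mean time to detection: average of (detected - occurred)."""
    return timedelta(seconds=mean((d - o).total_seconds() for o, d, _ in records))

def mttr(records) -> timedelta:
    """Mean time to remediation: average of (remediated - detected)."""
    return timedelta(seconds=mean((r - d).total_seconds() for _, d, r in records))

print(f"MTTD: {mttd(incidents)}, MTTR: {mttr(incidents)}")
```

Note that these averages blend every tool, process, and analyst involved in each incident — which is exactly why isolating a single platform’s contribution requires validation techniques like BAS or penetration testing instead.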

This presents a problem from a business perspective, where other departments quantify ROI and risk exposure based on hard numbers — whether those numbers are positive or negative, everything is measured. Consequently, business leaders are pressuring security teams to quantify expenditures similarly, enabling informed budgetary decisions grounded in exposure and risk metrics — either on the basis of past performance or future extrapolation.

The value of exposure management and security validation 

Exposure management and security validation have emerged as valuable complementary tools for measuring the effectiveness of individual solutions. They enable organizations to visualize how their controls are performing and allow them to create better security benchmarks that can be measured and assessed over time. Exposure management builds on this information by merging it with business context — not just showing where controls are strong or weak, but showing which areas of the business those strengths and weaknesses impact. Combined, they can show where risky attack paths exist, whether compensating controls are detecting, alerting on, and containing attackers effectively, and how this can impact the business’s operations.

This is a critical point. If — for example — a vulnerability exists on a legacy system that cannot be patched or removed due to impact on revenue, it is important to validate whether there is a compensating control in place. If a compensating control prevents the adversary from advancing their attack, then addressing the vulnerability may not be a high priority. Conversely, if the current suite of available tools and technologies cannot create a compensating control, then the business is exposed to significant risk and must decide whether that risk mandates a change to the business process. That could mean removing the legacy tool or platform altogether, or authorizing budget and resources to obtain new security controls.

The ProxyNotShell vulnerability, which impacted older Microsoft Exchange servers, is a good example. Many organizations rely on Microsoft Exchange components, like Public Folders and Custom Forms, that are not available in later versions or in Office 365. Fortunately, impacted organizations could work around the issue by implementing compensating controls to stop attackers from exploiting the vulnerabilities. A business relying on traditional vulnerability scanning might still consider this “vulnerable” (as the vulnerabilities themselves remain unpatched), but security validation demonstrates that the attacker cannot successfully execute an attack. The combination of exposure management and security validation lets businesses better understand which vulnerabilities can actively hurt them, which can be compensated for with current tools and methods, and which cannot be compensated for without budget, downtime, or other business impact — and then prioritize remediation actions accordingly.
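
The prioritization logic described above can be sketched as a simple scoring model. This is a hypothetical illustration, not any vendor’s actual algorithm: the field names, weights, and the 90% discount for a validated compensating control are all invented assumptions chosen to show the shape of the idea — a high-severity finding on a business-critical system drops down the queue once validation proves the attack path is blocked:

```python
# Hypothetical sketch: ranking remediation work by combining vulnerability
# severity, business context, and validation results. Weights are invented.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    severity: float          # e.g. CVSS base score, 0-10
    business_impact: float   # 0-1 weight from exposure management context
    control_validated: bool  # did BAS / pen testing show the attack was blocked?

def remediation_priority(f: Finding) -> float:
    """A validated compensating control heavily discounts the raw score."""
    score = f.severity * f.business_impact
    return score * 0.1 if f.control_validated else score

findings = [
    Finding("ProxyNotShell on legacy Exchange", 8.8, 0.9, control_validated=True),
    Finding("Unpatched VPN appliance", 7.5, 0.8, control_validated=False),
]
for f in sorted(findings, key=remediation_priority, reverse=True):
    print(f"{f.name}: priority {remediation_priority(f):.2f}")
```

Under these assumed weights, the unmitigated VPN finding outranks the nominally higher-severity Exchange finding — the scanner-only view of “vulnerable” inverts once validation results are factored in.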

Using information in context to drive continuous improvement

Combining exposure management and security validation can also provide security teams with a roadmap for further improvement, prioritized according to both the threat landscape and the business's operational needs. By providing context for performance numbers, these solutions more plainly illustrate whether security tools are functioning as they should be. With that information, security teams can then make tweaks and adjustments to try to improve that functionality and quantify those changes to justify any impact on the business and its processes. They can also track and trend that data to measure incremental improvements over time — especially important to establish ROI where budget expenditures were used to close gaps.
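
Tracking and trending that data can be as simple as comparing validation pass rates period over period. The figures below are invented for illustration — the point is that a quarter-over-quarter delta turns raw control performance into an incremental-improvement story a budget holder can follow:

```python
# Hypothetical sketch: trending security-validation pass rates
# (share of simulated attacks blocked) quarter over quarter.
# All data points are invented.
pass_rates = {"2024-Q1": 0.62, "2024-Q2": 0.71, "2024-Q3": 0.78, "2024-Q4": 0.84}

quarters = sorted(pass_rates)
deltas = {q2: round(pass_rates[q2] - pass_rates[q1], 2)
          for q1, q2 in zip(quarters, quarters[1:])}
print(deltas)  # quarter-over-quarter improvement in blocked-attack rate
```

A rising trend after a specific purchase or configuration change is exactly the kind of before-and-after evidence that helps tie budget spent on closing gaps to measurable results.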

While some security teams may chafe at the perceived attempt to quantify their performance, this can often work in their favor. The ability to rationalize spending is powerful — if, for example, budget cuts are looming, having hard numbers to point to makes it much easier to justify why eliminating certain solutions or processes would leave the organization dangerously vulnerable. This information also helps justify headcount and why it needs to be retained. On the other hand, if the budget is increasing, it also makes it easier to demonstrate where it might make sense to invest in additional tools, platforms, and staff to close security gaps in the organization’s risk profile. The data may even reveal that certain compensating controls are working well enough that another solution has become redundant — freeing up budget for other areas and even reducing the workload on team members, preventing burnout. To be clear, maintaining (or even expanding) team size remains important, but so does retaining current team members by ensuring they do not become overburdened and leave.

This allows organizations to granularly see where opportunities for improvement lie or where cost savings can be achieved without creating additional risk. It also enables security teams to better identify — and rationalize — which solutions are needed, and which are not, eliminating bloat and freeing up budget for other, more critical areas. In doing so, they can demonstrate both solid defenses and fiscal responsibility to senior leadership, the board and investors.  

Framing security in business terms

Security validation elevates risk analysis, enabling more granular examination of the value of individual tools. This improved information can help business leaders visualize the complex impact of individual solutions, demonstrating their value over time and identifying both savings opportunities and spending needs. Exposure management incorporates the business perspective into the technology arena — defining the levels of exposure to risk for the different lines of business and the processes that support them. Together, these methodologies define the current risk situation across the organization, how this risk exposure can impact the business, and how effective defensive operations are — both currently and over time. They offer a holistic view of security, enabling organizations to deploy solutions judiciously and effectively against existing and emerging threats, bolstering confidence among stakeholders.