For years, the focus in cybersecurity has been on collecting data. Data is a powerful tool, critical to finding the patterns, insights and behavioral signals that might indicate a vulnerability in the organization’s security posture or even an attack already underway. 


Yet those benefits have led us to a new problem: we are now drowning in data. Mountains of log data accumulate across the enterprise, pulled from open systems, cloud-native endpoints, security and compliance feeds, interconnected applications, servers, Internet of Things (IoT) devices and more. The result is mounting cost and overwhelmed engineers and analysts who must wade through false-positive alerts and an unending queue of investigations. 


This is much more than a data headache. The overload is introducing a new type of risk and forcing organizations to make difficult choices, often resulting in less security, not more.


Nine in ten security leaders in a recent Harris poll said they rely on log data to flag potential attacks and support other tasks. Yet more than half of organizations (57%) said they are forced to limit the number of logs they ingest or store because storing them all is simply too expensive. As a result, 63% said they sometimes lack the logs they need to troubleshoot or debug systems, and 82% of senior leaders said incident response efforts are hindered. 


These obstacles standing between defenders and their ability to protect their organizations are starkly at odds with the current threat landscape, which saw record numbers of attacks in 2021 and shows no signs of slowing in 2022. According to Cybersecurity Ventures, cyberattacks were expected to cost organizations $6 trillion by 2021, up from $3 trillion in 2015. That is an astronomical level of new risk for organizations in every industry. 


Organizations need to drive more value from the data they already have, particularly observability data. Observability has grown in popularity in recent years because it pairs monitoring with additional context about potential issues and why they might be occurring. By strengthening their observability capabilities, organizations can improve performance, threat detection, incident response and other key processes. 
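To make the monitoring-versus-observability distinction concrete, here is a minimal sketch. The latency check, deploy list and log lines are invented for illustration; real observability platforms correlate far richer telemetry than this.

```python
# A toy contrast: a monitoring check only says *that* a threshold was
# crossed; observability enriches the same signal with correlated context
# (what changed recently, related errors) to suggest *why*.
# All names and data below are illustrative assumptions.

def monitor(latency_ms: float, threshold: float = 500.0) -> bool:
    """Plain monitoring: a yes/no signal."""
    return latency_ms > threshold

def observe(latency_ms: float, deploys: list, logs: list) -> dict:
    """Observability: the same signal, enriched with likely causes."""
    alert = {"alert": monitor(latency_ms), "latency_ms": latency_ms}
    if alert["alert"]:
        alert["recent_deploys"] = deploys[-1:]  # what changed last
        alert["related_logs"] = [l for l in logs if "ERROR" in l]
    return alert

report = observe(
    820.0,
    deploys=["v1.4.1 2022-03-01T11:55Z"],
    logs=["INFO cache warm", "ERROR db pool exhausted"],
)
print(report["alert"], report["related_logs"])
# True ['ERROR db pool exhausted']
```

The point of the sketch: the alert alone tells a responder nothing new, while the attached deploy and error context shortens the path from detection to diagnosis.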


Volume is not the only challenge. Organizations also face organizational and data silos, vendor lock-in and a multitude of tools that fail to meet the needs of multiple data consumers. Further challenges arise in the DevOps and DevSecOps worlds, where team autonomy accelerates processes and creates new requirements around data access.


There are tools on the market that address these challenges, but according to the same survey, 66% said those tools are not easy to use, 67% said the tools make it difficult to collaborate across teams and 58% said they struggle to route security events toward resolution. In other words, the data problem is not yet solved, even though two-thirds of organizations spend more than $100,000 a year on these tools and one-third spend $300,000 or more. 


Organizations need ways to derive more value and insight from the logs they do collect and store. To stay compliant and secure, dropping logs is not an option. Instead, organizations should consider solutions that structure the data and route it to the appropriate destination for each use case, helping to manage cost and extract more value from data across the entire organization. 
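The structure-and-route idea can be sketched in a few lines. The destination tiers (`siem`, `archive`, `metrics`), the log format and the regex rules below are illustrative assumptions, not any particular product's behavior:

```python
import re

# Hypothetical routing rules: pattern -> destination tier.
ROUTES = [
    # Security-relevant events go to a hot, searchable store.
    (re.compile(r"auth|login|firewall", re.IGNORECASE), "siem"),
    # Verbose diagnostics go to cheap cold storage.
    (re.compile(r"debug|trace", re.IGNORECASE), "archive"),
]
DEFAULT_ROUTE = "metrics"  # everything else feeds monitoring summaries

def structure(line: str) -> dict:
    """Parse a raw 'timestamp level source message' line into fields."""
    ts, level, source, message = line.split(" ", 3)
    return {"ts": ts, "level": level, "source": source, "message": message}

def route(event: dict) -> str:
    """Pick a destination tier based on the event's source and message."""
    text = f"{event['source']} {event['message']}"
    for pattern, destination in ROUTES:
        if pattern.search(text):
            return destination
    return DEFAULT_ROUTE

raw_logs = [
    "2022-03-01T12:00:00Z WARN firewall blocked inbound connection",
    "2022-03-01T12:00:01Z DEBUG app cache trace for request 42",
    "2022-03-01T12:00:02Z INFO app request served in 12ms",
]

routed = {}
for line in raw_logs:
    event = structure(line)
    routed.setdefault(route(event), []).append(event)

print({tier: len(events) for tier, events in routed.items()})
# {'siem': 1, 'archive': 1, 'metrics': 1}
```

The design choice this illustrates: every log is kept, but only the security-relevant slice lands in the expensive, fast-query tier, which is how routing can cut cost without dropping data.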


In cybersecurity, we are fortunate to gain new data and insights from across the organization every day. By pairing that data with actionability, cybersecurity leaders stand a better chance of detecting, mitigating and responding to the attacks they will inevitably face this year and beyond.