Decision Automation in Security Operations Brings Transparency and Trust to AI
Like many industry buzzwords, “security automation” comes with plenty of hype. Yet for the first line of defense in an enterprise environment – the analysts working in the security operations center (SOC) – automation remains more headline than reality. Many basic tasks – logging, fault isolation, reporting, and incident troubleshooting – are still very much manual.
Analysts often monitor tens of thousands of alerts an hour. It is a tough problem – a constant battle of humans versus events – and that’s why the right automation is so important. Unfortunately, the automation tools available today only scratch the surface of the biggest challenges that security teams face.
Decision Automation Lays Groundwork for AI at Scale
Decision automation is a different class of security automation. It emulates human reasoning and decision-making, within the context of the environment, to cut through the high volume of noisy alerts that would otherwise produce many false positives. With decision automation, data can be analyzed at machine speed and enterprise scale, reducing the probability that threats go undetected or unremediated. This frees SOC teams to focus on threat hunting and other high-value tasks.
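The noise-reduction idea above can be sketched in a few lines. This is a minimal, hypothetical example – the alert fields, the known-scanner IP, and the suppressed signature are all invented for illustration, not drawn from any particular product – but it shows how environmental context lets software auto-close known-benign alerts so only the rest reach an analyst.

```python
from dataclasses import dataclass

# Hypothetical alert record; the field names are illustrative only.
@dataclass
class Alert:
    source_ip: str
    signature: str
    severity: int  # 1 (low) .. 5 (critical)

# Environmental context: sources and findings known to be benign here.
KNOWN_SCANNERS = {"10.0.0.5"}                     # internal vulnerability scanner
SUPPRESSED_SIGNATURES = {"TLS-SELF-SIGNED-CERT"}  # accepted-risk finding

def triage(alerts):
    """Split a high-volume alert stream into escalate and auto-close piles."""
    escalate, auto_close = [], []
    for a in alerts:
        if a.source_ip in KNOWN_SCANNERS or a.signature in SUPPRESSED_SIGNATURES:
            auto_close.append(a)   # contextual rule says this is noise
        else:
            escalate.append(a)     # no suppression applies: a human should look
    return escalate, auto_close

alerts = [
    Alert("10.0.0.5", "PORT-SCAN", 3),      # the internal scanner doing its job
    Alert("203.0.113.9", "C2-BEACON", 5),   # genuinely suspicious traffic
]
escalated, closed = triage(alerts)
```

In practice the context set would be far larger and continuously maintained, and every auto-close decision would be logged for audit, but the shape of the logic is the same.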
Automating level-one security monitoring is inevitable. The sooner the industry coalesces around that fact, the sooner we can solve the automation challenge and get it done. Security orchestration, automation and response (SOAR) products are part of the new SOC, but they are not the complete answer.
As the industry moves to automate, different methods, like artificial intelligence (AI), start to make a tangible impact in the new SOC.
Machines are Your Colleague, Not Your Competition
One of the biggest underlying issues when it comes to distrust of AI is the fear that AI solutions will take jobs away. But in cybersecurity, that’s not what’s happening. Security analysts are trying to find threats to the network, and now they have an ally in that goal – a machine that they teach and that in turn teaches them. An AI cybersecurity tool is like a coworker, not a replacement. It handles both the redundant tasks that waste people’s time and the in-depth, big-data analysis that people cannot do. This enables SOC teams to redirect their skills and be more effective in accomplishing their goal.
Decision automation for cybersecurity is one way to achieve transparency and trust in AI. Decision automation software automates the monitoring and triage process itself, rather than just providing information to people who then make the choices. The software’s decisions are based on preprogrammed business rules, making the basis for those decisions transparent.
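The transparency claim is easiest to see in code. In this minimal sketch – the rule names and alert fields are hypothetical – every automated decision carries a rationale naming the exact preprogrammed rule that produced it, so an analyst can always answer “why did the tool do that?”

```python
# Preprogrammed business rules, evaluated in priority order.
# Each entry: (rule name, predicate over an alert dict, resulting decision).
RULES = [
    ("close-internal-scanner", lambda a: a["src"] == "10.0.0.5", "auto-close"),
    ("escalate-critical",      lambda a: a["severity"] >= 4,     "escalate"),
]

def decide(alert):
    """Return (decision, rationale) so the basis for every choice is auditable."""
    for name, predicate, decision in RULES:
        if predicate(alert):
            return decision, f"matched rule '{name}'"
    # Fail safe: anything no rule covers goes to a human.
    return "escalate", "no rule matched; default to human review"

decision, why = decide({"src": "198.51.100.7", "severity": 5})
# decision is "escalate"; why names the rule that fired
```

Because the organization and the provider agree on the rule list up front, the “black box” problem discussed below never arises for these decisions: the rationale string is the explanation.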
Trust is Achieved Through Results
In the case of cybersecurity, organizations need to trust the results their security tools deliver. Decision automation helps ensure this. Everyone is clear from the start what the AI solution is basing its decisions on, because the provider and the organization have agreed in advance on which decisions will be automated and why. This goes a long way toward clearing up many of the unknowns. It’s similar in some ways to hiring a new employee: you can do as much due diligence as possible during the interview process, but you still don’t really know whether you can trust that person until you’ve worked with them and seen the results they produce. It’s the same for AI: you have to work with it just as you would a fellow human and see the results it produces.
Once an AI-based solution is working and can match or outperform a human, it then has better potential to be widely adopted. For instance, consider self-driving cars. Many people remain reluctant to use this mode of transportation until it’s proven that autonomous vehicles are at least as safe as those driven by people. Once that happens, greater adoption should rapidly follow. The capability has to match or exceed what a human can do. That level of performance engenders trust.
Transparency also engenders trust, but AI is not inherently transparent. It is susceptible to “black box syndrome,” where an AI-based tool offers results without any explanation of how it arrived at those results. When the decision-making criteria aren’t clear, the results could be perceived as wrong, incomplete or improperly aligned with the organization’s goals. And if the tool keeps generating false-positive anomalies all day, trust isn’t being built.
What’s more, it is quite useful and practical to understand how various attacks work and why some attacks get escalated but others don’t. This improves an organization’s ability to remediate the attack. An AI-based cybersecurity solution can provide investigative evidence that the organization would not otherwise have access to. That’s in sharp contrast to AI tools based on unsupervised machine learning that simply generate anomaly-based detections. These tend to produce a lot of false positives, which wastes time and doesn’t inspire confidence and trust.
It’s been said that business moves at the speed of trust. This holds true for IT security, as organizations that can’t trust the data their AI tools are giving them will end up with a much slower response time to real and possible threats. Transparency builds trust, but AI tools historically have not done a great job of being transparent with the “why” aspect of their results. Decision automation fixes this issue because the organization determines in advance which decisions to automate. This promotes an atmosphere of trust in which IT security teams can rely on the information they receive, act on it more quickly and with greater insights, and know that the AI tool is a coworker rather than a replacement.