In a rapidly transforming threat landscape, cyber defense solutions must be both innovative and flexible to harden organizational security against ever-evolving adversarial attacks. While current signature detection techniques effectively combat known attack structures, they are inherently reactive and require significant time to respond to sophisticated attacks. These challenges are compounded by the individualized characteristics of a given network, as each demands a system that understands its unique threats. Cybersecurity experts face the challenge of building flexible solutions that can learn the norms of a given network while rapidly adapting to defend against new attack structures. Timely identification of cyber threats hidden within the high volume of data a network generates is an industry-wide problem that continues to challenge and stress organizations’ cybersecurity operations.

To address such complex challenges, many organizations have taken on efforts to implement artificial intelligence (AI)-based solutions into their cyber operations. According to an April 2019 survey by the Consumer Technology Association, AI’s top use in 2018 was in cybersecurity, with 44 percent of all AI applications being used to detect and deter security intrusions. In a July 2019 survey by Capgemini, more than two-thirds of respondents said that they believe AI will be necessary to respond to future cyberattacks given the current threat environment.

Yet this widespread demand for AI exceeds most organizations’ ability to develop and operationalize AI-based solutions. In fact, a 2018 Gartner study predicted that 85 percent of AI projects will fail to deliver on substantive Return on Investment (ROI), and AI projects related to an organization’s cyber operations are no exception to this trend. While AI-driven models often demonstrate a strong ability to learn from past malicious network activity, these models consistently fail to identify increasingly sophisticated attacks across a rapidly expanding attack surface. Worse, efforts to automate the workflows of security operations centers (SOCs) by deploying these models have had the opposite effect, instead drowning SOC operators in a sea of false positives and making it even more difficult for organizations to identify and address malicious network activity.

As such, effectively implementing AI into an organization’s cybersecurity operations requires far more than training an advanced AI algorithm or deploying an automated process to “plug and play” within existing cyber operations. Commoditized defenses will only stop commoditized attackers, not the persistent attacks commonly seen from nation-states and other sophisticated adversaries. Organizations facing threats this serious must implement AI through an immediate but well-thought-out, integrated strategy, rather than simply bolting another capability onto existing systems. A successful approach therefore requires a multi-pronged cybersecurity effort that understands how, when and where AI can effectively enhance, streamline and integrate with an organization’s cyber operations.

Based on our experience as an industry provider of cybersecurity and AI solutions, we believe these five steps will help organizations operationalize AI into their cybersecurity technology systems, business practices and mission operations.

 

1. Consider Goals and Risks

For those organizations ready to accept risks both known and unknown, AI offers powerful potential for predictive insight, precisely targeted resource allocation and proactive approaches to strengthening their security posture. It can also help augment security teams, reduce analyst “alert fatigue” and provide advanced detection capabilities. However, risks must be properly identified and viewed as opportunities rather than as barriers to success.

Prior to implementing AI into cybersecurity operations, organizations should establish expectations, risks and success criteria. According to research firm IDC, one quarter of companies implementing AI projects report a failure rate of up to 50 percent, most notably citing unreasonable expectations as a source of failure. To ensure expectations are understood from the start, organizations must identify a well-defined AI use case and then establish program management processes to direct development efforts. In doing so, organizations can ensure that from their genesis, AI projects are oriented toward clearly articulated business objectives with documented desired outcomes, a clear understanding of required resources and Critical Success Factors against which to measure implementation.

 

2. Establish the Foundation

AI offers powerful potential for augmenting existing cybersecurity tools beyond traditional signature-based approaches and offers a mechanism for the rapid validation and prioritization of threats. However, understanding the basics of the network is essential for success, specifically in the areas of visibility, governance, storage and processing, and workflows.

 

Visibility

First, all assets on the network must be accounted for through an established IT Asset Management Program. Studies more than a decade old show that most organizations cannot account for nearly 30 percent of their assets – a troubling statistic that our experience continues to bear out today. Understanding what is on the network is key to recognizing and responding to cybersecurity incidents, in addition to ensuring AI models are using the right data. CrowdStrike’s 2019 Global Threat Report suggests that the time it takes threat actors to spread across a network ranges from 18 minutes to nine hours. Attempting to track down assets after detection can significantly increase the Mean Time to Remediation.
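As a simple illustration of why asset visibility also matters for the data feeding AI models, the minimal sketch below reconciles hosts observed in network logs against an asset inventory export; the inventory format, field names and observed IPs are hypothetical assumptions, not a prescribed asset management process.

```python
# Minimal sketch: reconciling hosts observed in network logs against the IT asset
# inventory so unmanaged devices surface before an incident, not during one.
import csv
import io

# Hypothetical CMDB export; in practice this would come from the asset management system.
cmdb_export = io.StringIO("hostname,ip_address\nweb01,10.0.0.5\nwks-042,192.168.4.20\n")
known_ips = {row["ip_address"].strip() for row in csv.DictReader(cmdb_export)}

# IPs observed in flow or DNS logs over some window (also hypothetical).
observed_ips = {"10.0.0.5", "10.0.0.99", "192.168.4.20"}

# Anything seen on the wire that the asset program cannot account for gets flagged.
for ip in sorted(observed_ips - known_ips):
    print("unmanaged asset observed on the network:", ip)
```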

 

Governance

Next, the best operationalized AI use cases require multiple data feeds, each representing a unique perspective on what is happening on an organization’s network and infrastructure. As with human operations, AI performs best when many perspectives can be fused into one comprehensive picture. However, this is often challenging, as each model may expect data in a unique structure and format. For this reason, it’s critical that organizations stand up a common data model (e.g., the Splunk Common Information Model (CIM) or the Elastic Common Schema (ECS)). This model can be used to link multiple data feeds into a single source of data truth and ensure each algorithm in an organization’s model suite is built on the same data foundation.
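To make the idea concrete, the minimal sketch below maps a hypothetical raw firewall event onto a handful of Elastic Common Schema-style field names; the raw event shape and the field selection are illustrative assumptions, and real feeds require their own vendor-specific mappings.

```python
# Minimal sketch: normalizing a raw vendor log event into ECS-style field names
# so every model in the suite trains and scores against identical fields.

def to_common_schema(raw_event: dict) -> dict:
    """Map a hypothetical firewall event onto a few Elastic Common Schema field names."""
    return {
        "@timestamp": raw_event["ts"],
        "source.ip": raw_event["src"],
        "destination.ip": raw_event["dst"],
        "destination.port": int(raw_event["dpt"]),
        "network.transport": raw_event.get("proto", "tcp").lower(),
        "event.action": raw_event.get("action", "unknown"),
        "observer.vendor": "example-firewall",  # hypothetical feed name
    }

# The same normalizer runs in front of every downstream model.
raw = {"ts": "2019-07-01T12:00:00Z", "src": "10.0.0.5", "dst": "203.0.113.7",
       "dpt": "443", "proto": "TCP", "action": "allowed"}
print(to_common_schema(raw))
```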

 

Storage and Processing

Once the data is standardized, the use of a data broker (e.g., Kafka, RabbitMQ) can help move data outside of existing security platforms to where advanced analytic capabilities can take place. By decoupling the storage and compute layers, resource-intensive AI models can run more freely without bogging down the real-time identification of threats. This also helps prevent vendor lock-in should organizations change products at a later time. These separate systems also support the storage of tagged flat files better suited to AI use cases when currently deployed tools don’t support a similarly extensible storage method.
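As a rough sketch of this decoupling, the example below consumes normalized events from a Kafka topic in its own consumer group so a resource-intensive model can score them without touching the real-time detection path; the topic name, broker address and scoring function are assumptions, not a prescribed architecture.

```python
# Minimal sketch: pulling normalized events off a data broker (Kafka here) so that
# heavyweight AI scoring runs outside the real-time security platform.
import json

from kafka import KafkaConsumer  # pip install kafka-python


def score_event(event: dict) -> float:
    """Placeholder for an AI model; returns a risk score between 0 and 1."""
    return 0.0


consumer = KafkaConsumer(
    "normalized-security-events",              # hypothetical topic fed by upstream normalizers
    bootstrap_servers="broker.internal:9092",  # hypothetical broker address
    group_id="ai-scoring",                     # separate consumer group; real-time pipelines unaffected
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    if score_event(event) > 0.9:               # threshold is a tunable assumption
        print("high-risk event for analyst review:", event.get("source.ip"))
```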

 

Workflows

Last, organizations must establish clearly defined and organized workflows and processes that extend beyond the security team. In a 2019 Ponemon Institute study, only 23 percent of the 3,665 organizations surveyed said their company had an incident response plan applied consistently across the entire enterprise, while 24 percent admitted to having no incident response plan in place at all. As new threats are detected, organizations need a solid grasp of their incident response processes to address them effectively. If the number of alerts rises after new detection methods are deployed, analysts can quickly become overwhelmed, which in turn jeopardizes the success of an organization’s AI deployment.
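One small way to keep a new detection capability from flooding analysts is to deduplicate alerts before they reach the queue. The sketch below escalates only the first alert per rule-and-host pair within a time window; the key, window length and alert structure are illustrative assumptions rather than a recommended incident response workflow.

```python
# Minimal sketch: suppressing duplicate alerts within a time window so a surge
# from new detection models does not overwhelm analysts.
from datetime import datetime, timedelta


class AlertDeduplicator:
    def __init__(self, window_minutes: int = 60):
        self.window = timedelta(minutes=window_minutes)
        self._last_seen = {}  # (rule, host) -> time of last escalation

    def should_escalate(self, alert: dict) -> bool:
        """Escalate only the first alert per (rule, host) pair in each window."""
        key = (alert["rule"], alert["host"])
        now = datetime.utcnow()
        last = self._last_seen.get(key)
        self._last_seen[key] = now
        return last is None or now - last > self.window


dedup = AlertDeduplicator(window_minutes=60)
for alert in [{"rule": "dga-dns", "host": "wks-042"},
              {"rule": "dga-dns", "host": "wks-042"}]:  # hypothetical alerts
    if dedup.should_escalate(alert):
        print("escalate to incident response:", alert)
```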

With these foundations in place, organizations will be more effectively prepared to validate, prioritize and analyze potential threats. With the basics covered, launching AI across your organization is just a few more steps away.

 

3. Understand the Human Element

AI complements human effort by supporting analysts in reducing errors, accelerating analysis and automating labor-intensive tasks. But the human element of AI also poses a variety of challenges organizations must grapple with before launching an AI cybersecurity initiative. Namely, will the organization be able to sufficiently staff the initiative to generate an ROI?

An affirmative answer is not a given. For starters, working with AI requires an unusual blend of skills. Furthermore, a substantial gap exists between the demand for trained cybersecurity workers and the supply of trained applicants. More than half of organizations have failed to begin or further their AI implementation efforts due to the lack of sufficiently trained staff, according to Gartner.

Outsourcing is one solution for “minding the gap,” but puts an organization at risk of losing intellectual property and institutional knowledge when the contract ends. In-house training can be expensive and requires competencies beyond the reach of many organizations.

To mobilize, manage and maximize the human element of AI in cybersecurity, ask:

  • How does your organization determine which tasks to automate and which to keep in human hands?
  • How does your organization plan to evolve certain cyber roles, such as testing and evaluation, tier 1 security operations center, systems administration and infrastructure support?
  • How will emerging roles be introduced into the enterprise? These might include cyber data scientists and employees who maintain machine learning (ML) models or who train others, conduct outreach, or support the integration of ML into operations.
  • What resources does your organization have in place to educate employees about AI overall (e.g., online tutorials, webinars, podcasts, hack-a-thons)?

 

4. Focus on Use Cases

One big question that emerges with any new technology initiative, in cybersecurity and AI and beyond, is: what areas can organizations make more efficient, and what is the ROI? Another big question for those considering AI in cybersecurity: where do I start? To reduce risk and increase the success of your AI implementation, we believe organizations should focus on implementing targeted AI use cases as a first step in any broader AI adoption initiative. In cybersecurity, there is no shortage of good places to start. Pairing a compelling technical problem with an organization’s particular network features and strategic aims is a fruitful paradigm for generating and evaluating potential use cases.

AI is difficult to implement at a broad level, and when common use cases such as anomaly detection fail at that scale, the failure is often read as a failure of the implementation itself. Instead, the application of AI use cases should follow two primary methods in alignment with the organizational security strategy: first, decomposing the security analyst’s workflow to better understand where the need is, and second, accounting for all monitored and unmonitored data sources. In the Ponemon study mentioned earlier, more than 50 percent of organizations focused their security resiliency measures of success on preventing cybersecurity incidents and on Mean Time to Identification, while also using between 51 and 100 security tools. Not only can AI offer support to inundated security operators expected to act after piecing together data from multiple sources and tools, but it can also provide a huge lift in detection, mitigation and overall security posture by monitoring previously unmonitored data – for example, deploying AI models to look for domains produced by Domain Generation Algorithms (DGAs) associated with unidentified malware in Domain Name System (DNS) logs, or to flag malicious command line executions that typically are not monitored in real time.
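As one illustration of monitoring previously unmonitored data, the sketch below flags DNS queries whose domain labels have unusually high character entropy, a weak signal (among the many stronger features a trained model would use) that a DGA may have produced them; the threshold and sample queries are assumptions, not a production detector.

```python
# Minimal sketch: flagging DNS queries whose domains look machine-generated.
# Character entropy is only one weak DGA signal; threshold and sample log
# entries below are illustrative assumptions.
import math
from collections import Counter


def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Heuristic: a long, high-entropy label next to the TLD may indicate a DGA."""
    parts = domain.lower().rstrip(".").split(".")
    if len(parts) < 2:
        return False
    label = parts[-2]
    return len(label) >= 10 and shannon_entropy(label) > threshold


dns_queries = ["mail.example.com", "xkqjz7h3f9tq2vlp.info"]  # hypothetical DNS log entries
for query in dns_queries:
    if looks_generated(query):
        print("possible DGA domain, queue for analyst review:", query)
```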

Adhering to this step may require a significant upfront investment from organizations. However, once these use cases are established, the move to automation will increase the speed of detection and response.

 

5. Automate and Orchestrate for Quick ROI

Once the first four steps have been taken and organizations better understand how AI can increase ROI across strategic goals, the next step is to automate processes and allow analysts to refocus their efforts. Nearly 75 percent of organizations understand the benefits of automation, yet almost a third have failed to implement resource-saving automation initiatives. Threat intelligence collection, vulnerability scoring, phishing email header analysis and traffic pattern analysis are prime areas to begin both AI use cases and automation for a quick ROI.
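As an example of the kind of quick win available in phishing email header analysis, the sketch below runs a first-pass check on reported message headers, flagging a From/Return-Path domain mismatch and failed authentication results; the checks and sample headers are illustrative assumptions, far short of a complete phishing detector.

```python
# Minimal sketch: automating a first-pass review of email headers to surface
# likely phishing for analyst attention.
from email import message_from_string
from email.utils import parseaddr


def header_red_flags(raw_headers: str) -> list:
    """Return a list of simple red flags found in the message headers."""
    msg = message_from_string(raw_headers)
    flags = []
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    return_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    if from_domain and return_domain and from_domain != return_domain:
        flags.append(f"From/Return-Path domain mismatch: {from_domain} vs {return_domain}")
    auth_results = (msg.get("Authentication-Results", "") or "").lower()
    for check in ("spf=fail", "dkim=fail", "dmarc=fail"):
        if check in auth_results:
            flags.append(f"authentication failure: {check}")
    return flags


# Hypothetical headers copied from a user-reported message.
sample = (
    "From: IT Support <helpdesk@example.com>\n"
    "Return-Path: <bounce@suspicious-domain.test>\n"
    "Authentication-Results: mx.example.com; spf=fail; dkim=fail\n"
)
print(header_red_flags(sample))
```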

Automation projects run the gamut in terms of scope and complexity. For an organization with a low risk tolerance, automation of simple processes can be put in place quickly and free up staff time, thereby creating efficiencies and savings. Organizations with greater AI talent, resources and maturity, by contrast, might consider developing an autonomous system for contextual reasoning to yield insights into the motivations behind threats and actors.

 

Conclusion

In the current cybersecurity environment, adversaries are employing increasingly sophisticated algorithms and diversified methods that outpace blacklists and rules- and behavior-based cyber operations. Traditional, reactive measures are no longer enough. Organizations need to quickly identify where intrusions occurred, the likely attack vectors moving forward and how to quickly remediate exploited vulnerabilities – all in a shortened window of response time.

With its ability to introduce workflow automation, behavior and streaming analytics, active monitoring, intelligent prediction and advanced network threat detection, AI can help. Yet as with any new technology, AI is far from a plug-and-play or one-size-fits-all proposition.

Fortunately, with a solid foundation, the right talent and a strategic approach, organizations can avoid adoption pitfalls. In fact, successfully executed AI cybersecurity initiatives promise the potential to rapidly comb through large volumes of network data, accelerate analysis and unearth previously unseen threats and connections.