While the concept of Zero Trust was created a decade ago, the events of 2020 have thrust it to the top of enterprise security agendas. The COVID-19 pandemic has driven mass remote working, which means that organizations’ traditional perimeter-based security models have broken down, in many cases literally overnight. In this new normal of remote working, an organization's network is no longer a single thing in one location: it is everywhere, all of the time. Even an organization with a single data center in one place has that data center accessed by multiple users on multiple devices.

With the sprawling, dynamic nature of today’s networks, if you don’t adopt a Zero-Trust approach, a breach in one part of the network could quickly cripple your organization as malware, and especially ransomware, makes its way unhindered across it. We have seen multiple examples of ransomware attacks in recent years: organizations spanning all sectors, from hospitals to local government and major corporations, have suffered large-scale outages. Put simply, few could argue that a purely perimeter-based security model makes sense anymore.

Five Practical Steps

So how should organizations go about applying the Zero Trust blueprint to address their new and complex network reality? These five steps represent the most logical way to achieve Zero-Trust networking, by finding out what data is of value, where that data is going and how it's being used. The only way to do this successfully is with automation and orchestration.

  1. Identifying and segmenting data

This is one of the most complicated areas of implementing Zero Trust, since it requires organizations to figure out what data is sensitive.

Businesses that operate in highly regulated environments probably already know what that data is, since regulators already require oversight of it. Another approach is to separate systems that humans have access to from other parts of the environment, for example the parts of the network that smartphones, laptops or desktops can connect to. Unfortunately, humans are often the weakest link and the first source of a breach, so it makes sense to separate these types of network segments from servers in the data center. Naturally, all home-user connections into the organization need to be terminated in a segregated network segment.
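To make this more concrete, here is a minimal sketch in Python of that kind of coarse segmentation decision. The endpoint types, segment names and classification rules are illustrative assumptions, not a prescription for any particular environment.

```python
# Illustrative sketch: place endpoints into coarse segments, keeping
# human-operated devices and remote-access connections away from servers.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    kind: str                       # e.g. "laptop", "smartphone", "server", "vpn-client"
    handles_regulated_data: bool = False

def assign_segment(ep: Endpoint) -> str:
    """Assign an endpoint to a network segment (hypothetical segment names)."""
    if ep.kind == "vpn-client":
        return "remote-access"      # home-user connections terminate here
    if ep.kind in ("laptop", "smartphone", "desktop"):
        return "user-devices"       # human-operated, highest breach risk
    if ep.handles_regulated_data:
        return "regulated-servers"  # systems already under regulatory oversight
    return "general-servers"

inventory = [
    Endpoint("alice-laptop", "laptop"),
    Endpoint("payroll-db", "server", handles_regulated_data=True),
    Endpoint("web-frontend", "server"),
    Endpoint("bob-home-vpn", "vpn-client"),
]

for ep in inventory:
    print(f"{ep.name:14s} -> {assign_segment(ep)}")
```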

  2. Mapping the traffic flows of your sensitive data and associating them with your business applications

Once you’ve identified your sensitive data, the next step is knowing where the data is going, what it is being used for and what it is doing. Data flows across your networks. Systems and users access it all the time, via many business applications. If you don’t know this information about your data, you can’t effectively defend it.

Automated discovery tools can help you understand the intent behind your data flows: why is that flow there? What is its purpose? What data is it transferring? Which application is it serving? With the right tooling, you can start to grow your understanding of which flows need to be allowed. Once you have that, you can then get to the Zero-Trust part of saying “and everything else will not be allowed.”
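As a rough illustration, the fragment below sketches what that intent annotation could look like once a discovery tool has produced flow records. The flows, application names and intent descriptions are hypothetical; the point is that anything without a known business purpose becomes a candidate for “not allowed.”

```python
# Illustrative sketch: tag discovered flows with the business application they
# serve; flows with no known intent are set aside for review.
discovered_flows = [
    {"src": "web-frontend", "dst": "orders-db",    "port": 5432},
    {"src": "user-devices", "dst": "web-frontend", "port": 443},
    {"src": "build-server", "dst": "orders-db",    "port": 5432},  # purpose unknown
]

# Intent map built up during discovery, e.g. with input from application owners.
flow_intent = {
    ("web-frontend", "orders-db", 5432):   "Orders app: API tier reads/writes order data",
    ("user-devices", "web-frontend", 443): "Orders app: staff use the web UI",
}

allowed, needs_review = [], []
for f in discovered_flows:
    key = (f["src"], f["dst"], f["port"])
    if key in flow_intent:
        allowed.append({**f, "intent": flow_intent[key]})
    else:
        needs_review.append(f)  # no known business purpose yet

print("Allowed flows:", allowed)
print("Flows needing review:", needs_review)
```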

  3. Architecting the network

Once you know which flows should be allowed (and that everything else should be blocked), you can move on to designing a network architecture and a filtering policy that enforces your network’s micro-perimeters. In other words, architecting the controls to make sure that only legitimate flows are allowed.

Current virtualization technologies allow you to architect such networks much more easily than in the past. Software-defined networking (SDN) platforms within data centers and public-cloud providers all allow you to deploy filters within the network fabric – so placing the filtering policies anywhere in your networks is technically possible. However, actually defining the content of these filtering policies – the rules governing the allowed flows – is where the automated discovery really pays off.
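A minimal sketch of that policy content might look like the following: the intent-annotated flows become explicit “allow” rules, followed by a single catch-all “deny.” The rule format and segment names are illustrative and not tied to any particular SDN platform or vendor.

```python
# Illustrative sketch: turn the discovered, approved flows into a
# default-deny filtering policy for the micro-perimeters.
allowed_flows = [
    ("user-devices", "web-frontend", 443),
    ("web-frontend", "orders-db", 5432),
]

def build_policy(flows):
    rules = [
        {"action": "allow", "src": src, "dst": dst, "port": port}
        for src, dst, port in flows
    ]
    # The Zero-Trust part: one final rule denying everything not explicitly allowed.
    rules.append({"action": "deny", "src": "any", "dst": "any", "port": "any"})
    return rules

for rule in build_policy(allowed_flows):
    print(rule)
```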

After going through the discovery process, you are able to understand the intent of the flows and can place boundaries between the different zones and segments. This is a balancing act between how much control (and therefore security) you want to achieve and how much effort you can invest: with many tiny islands of connectivity, or micro-segments, you have to think about how much time you want to spend setting them up and managing them over time. Discovering intent simplifies this because it helps you decide where to logically place these segments.

  4. Monitoring

Once the microsegments and policies are deployed, it’s essential to monitor everything. This is where visibility comes into its own. The only way to know if there is a problem is by monitoring traffic across the entire infrastructure, all the time.

There are two important facets to monitoring. Firstly, you need continuous compliance. You don’t want to be in a situation where you only check that you are compliant when the auditors drop in. This means monitoring configurations and traffic all the time, so that when the auditor does arrive, you can simply show them the latest report.

Secondly, organizations have to make the distinction between the learning phase of monitoring and the enforcement stage. In the discovery’s learning phase, you are monitoring the network to learn all the flows that are there and to annotate them with their intent. This allows you to see which flows are necessary before writing the policy rules. There comes a point, however, where you have to stop learning and decide that any flow you haven’t seen is an anomaly which you will block by default. This is where you can make the big switch from a default ‘allow’ policy to a default ‘deny,’ or organizational ‘D-Day.’
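The sketch below illustrates that learning/enforcement split with a toy flow baseline. The flows are hypothetical, and the real switch would of course happen in your filtering infrastructure rather than in application code.

```python
# Illustrative sketch: record flows during the learning phase, then flag
# anything outside the learned baseline as an anomaly after "D-Day".
learned_baseline = set()
enforcing = False  # flipped to True on D-Day

def observe(src, dst, port):
    flow = (src, dst, port)
    if not enforcing:
        learned_baseline.add(flow)   # learning phase: record, annotate intent later
        return "learned"
    if flow in learned_baseline:
        return "allowed"
    return "blocked (anomaly)"       # enforcement phase: default deny

print(observe("web-frontend", "orders-db", 5432))  # learned
enforcing = True                                    # the switch from default allow to default deny
print(observe("web-frontend", "orders-db", 5432))  # allowed
print(observe("build-server", "orders-db", 5432))  # blocked (anomaly)
```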

At this stage, you can switch to monitoring for enforcement purposes. From then on, any developer who wants to allow another flow through the data center will have to file a change request and get permission to have that connectivity allowed.

  5. Automating and orchestrating

Finally, the only way you will ever get to D-Day is with the help of a policy engine, the central ‘brain’ behind your whole network policy. Without this, you have to do everything manually across the entire infrastructure every time there is a need for a change.

Your policy engine, enabled by automation and orchestration, can compare any change request against what you have defined as your legitimate business connectivity requirements. If the additional connectivity being requested is in line with what is defined as acceptable use, then it should move ahead zero-touch, in a fully automated manner, deploying the necessary filter updates in minutes. Only requests that fall outside the guidelines of acceptable use need to be reviewed and approved by human experts.
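As a simplified illustration of that decision logic, the sketch below checks a requested flow against a list of declared business connectivity requirements. The segment names, ports and rule set are assumptions for the example; a real policy engine would evaluate far richer context before approving anything.

```python
# Illustrative sketch: a change request that matches declared business
# connectivity is approved zero-touch; anything else goes to a human reviewer.
business_requirements = {
    ("web-frontend", "orders-db", 5432),
    ("user-devices", "web-frontend", 443),
}

def evaluate_change_request(src, dst, port):
    if (src, dst, port) in business_requirements:
        return "auto-approved: push filter updates to enforcement points"
    return "flagged: requires review by a human expert"

print(evaluate_change_request("web-frontend", "orders-db", 5432))
print(evaluate_change_request("build-server", "orders-db", 5432))
```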

Once approved (automatically or after review), a change needs to be deployed. If you have to deploy a change to potentially hundreds of different enforcement points, using all kinds of different technologies, each with its own intricacies and configurations, this change request process is almost impossible without an intelligent automation system.

Focus on business outcomes rather than security outcomes

Removing the complexity of security enables real business outcomes, since processes become faster and more flexible without compromising security or compliance. Right now in many organizations, even with the limited segmentation that they have in place already, pushing through a change post ‘D-Day’ is very slow – sometimes taking weeks to get through the approval stage on the security side because there is a lot of manual work involved. Micro-segmentation can make this even more complex.

However, using the steps I’ve outlined here to automate Zero Trust practices means that the end-to-end time from making a change request to deployment and enforcement goes down to one day, or even a few hours – without introducing risk. Put simply, automation means organizations spend less time and budget on managing their security infrastructure, and more on enabling the business. That’s a true win-win situation.