Encryption is a double-edged sword. It gives us, as users, better privacy: it lets us keep what we do, and with whom we do it, to ourselves. At the same time, it allows attackers to remain stealthy and hidden from our detection mechanisms.

The Privacy vs. Responsibility Problem

Some organizations mitigate this by having all their employees sign a legal document that allows the cybersecurity team to decrypt their internet activity (some limit this by intentionally excluding certain common medical or financial websites). This approach suffers from two main flaws.

First and foremost, it can only be applied to some types of encrypted connections but not others, as explained later, so it doesn't give defenders visibility into all encrypted traffic. Worse, such organizations deliberately and intentionally harm the privacy of their employees in the name of improving the organization's cybersecurity posture.

Secondly, decrypting such traffic is technologically complex and forces the organization to accept responsibility for the decrypted data. As a result, many organizations either don't inspect it thoroughly or don't inspect it at all, neglecting their responsibility for the safety of their data and their business continuity, and by extension for the financial future of all of their employees.

However, there is a third way. For that, we’ll need to dive into some details on two of the most widely used encryption protocols and how they work.

TLS and SSL and why they’re important

TLS, and its now-deprecated predecessor SSL, runs on top of the transport layer of TCP/IP and is often used to secure application protocols such as HTTP (as HTTPS), FTP (as FTPS), and more. TLS allows network administrators to configure their network so that a routing or proxy device can function as a "Man-In-The-Middle": the device terminates the TLS connection, decrypts and inspects the traffic, and re-encrypts it before letting it proceed as intended, typically by issuing certificates from an internal CA that the organization's endpoints are configured to trust.

Another critical development in TLS is PFS, or "Perfect Forward Secrecy." With PFS, the two sides derive a fresh session key for every connection using an ephemeral Diffie-Hellman key exchange and discard the ephemeral keys afterwards. This prevents passive listeners, such as traffic analyzers or eavesdroppers, from decrypting recorded traffic later, even if they obtain the server's long-term private key. (In TLS 1.3, forward-secret key exchange is mandatory.)
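The core idea can be sketched with a toy Diffie-Hellman exchange. This is a minimal illustration only: real TLS uses standardized groups or elliptic curves (e.g. X25519) with much larger parameters, and the tiny prime here is chosen purely for readability.

```python
import secrets

# Toy ephemeral Diffie-Hellman key agreement (NOT secure: real TLS uses
# vetted groups or curves such as X25519 and far larger parameters).
P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, the largest 64-bit prime (toy-sized)
G = 5                   # generator

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for one session."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each side generates a NEW ephemeral key pair for every connection.
client_priv, client_pub = ephemeral_keypair()
server_priv, server_pub = ephemeral_keypair()

# Both sides derive the same shared secret from the exchanged public values.
client_secret = pow(server_pub, client_priv, P)
server_secret = pow(client_pub, server_priv, P)
assert client_secret == server_secret

# Because the ephemeral private keys are discarded after the session,
# a recording of the handshake plus the server's long-term private key
# is not enough to recover this secret later -- that is forward secrecy.
```

The key point is that the session secret depends only on the discarded ephemeral keys, never on the server's long-term key, so compromising the latter reveals nothing about past sessions.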

But it’s more difficult to Man-In-The-Middle SSH

SSH, or Secure Shell, was developed to manage servers by sending commands and receiving their output over a secure channel, and it also supports tunneling other protocols. Unlike TLS, it doesn't allow network administrators to configure a routing device as a "Man-In-The-Middle": SSH clients pin each server's host key on first use and will warn loudly, or refuse to connect, if that key later changes, which is exactly what an interception device would cause. That means that if network administrators allow outgoing SSH traffic from their organization to the internet, their users can tunnel arbitrary traffic through SSH unseen.

Metadata and context instead of decryption

So, now that we know how these protocols work, I'd like to suggest a different approach to handling encrypted communications. The malicious payload is, of course, encrypted. However, we can still judge whether encrypted traffic is malicious by analyzing valuable metadata and specific protocol properties, and by relying on anomaly detection.

How metadata analysis can be used

SSH uses smaller packets for login attempts than for sending commands, receiving output, or transferring files. That fact lets tools automatically detect brute-force and password-guessing attempts in SSH without decrypting the traffic. Another example: the set of TLS certificate issuers seen in an organization's connections, especially from servers, is relatively small, so a TLS connection using a certificate issuer that hasn't been seen in that organization in the past month is a good indicator that something is wrong.
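The packet-size heuristic can be sketched in a few lines. The cutoff and threshold values below are illustrative assumptions, not values from any specific product, and the packet-size lists are simulated.

```python
# Hedged sketch: flag possible SSH brute forcing from packet sizes alone.
# The cutoff and threshold are illustrative assumptions; real detectors
# would also baseline per environment and use time windows.
SMALL_PACKET_BYTES = 128    # login exchanges tend to be small
BRUTE_FORCE_THRESHOLD = 20  # many small packets with no bulk data

def looks_like_brute_force(packet_sizes: list[int]) -> bool:
    """True when a flow is dominated by login-sized packets."""
    small = sum(1 for size in packet_sizes if size <= SMALL_PACKET_BYTES)
    return small >= BRUTE_FORCE_THRESHOLD

# A normal session: a few small login packets, then large data transfers.
normal = [96, 112, 1400, 1500, 1500, 1500]
# A guessing loop: nothing but small login-sized packets, over and over.
suspicious = [96, 104, 96, 100] * 10

print(looks_like_brute_force(normal))      # False
print(looks_like_brute_force(suspicious))  # True
```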

A third example: a TLS connection will usually start with a DNS query for the domain name being contacted, especially on Linux. In many cases this is enough to detect DNS anomalies, such as an unusual number of unique DNS queries to the same parent domain or unusual DNS connection durations. If these deviate from the baseline of that particular environment, they are very good indications of malicious activity.
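A minimal sketch of the unique-subdomain signal might look like this. The threshold and the domain names are illustrative assumptions; a real detector would compare against a learned per-environment baseline rather than a fixed constant.

```python
from collections import defaultdict

# Hedged sketch: count unique leftmost labels queried under each parent
# domain. A burst of unique subdomains (as in DNS tunneling or DGA
# activity) stands out; the threshold is an illustrative assumption.
UNIQUE_SUBDOMAIN_THRESHOLD = 50

def anomalous_parents(queries: list[str],
                      threshold: int = UNIQUE_SUBDOMAIN_THRESHOLD) -> set[str]:
    """Return parent domains with an unusual number of unique subdomains."""
    seen: dict[str, set[str]] = defaultdict(set)
    for query in queries:
        labels = query.rstrip(".").split(".")
        if len(labels) >= 3:
            parent = ".".join(labels[-2:])  # e.g. "example.com"
            seen[parent].add(labels[0])     # the leftmost label
    return {p for p, subs in seen.items() if len(subs) >= threshold}

# Simulated tunneling: many random-looking subdomains of one domain,
# mixed with a couple of ordinary queries.
tunnel = [f"x{i:04d}.evil-example.com" for i in range(60)]
normal = ["www.example.com", "mail.example.com"]
print(anomalous_parents(tunnel + normal))  # {'evil-example.com'}
```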

Final Thoughts

All of the above, and much more, can be done without decrypting the traffic or violating the employees' right to privacy, while still giving cybersecurity defenders an excellent chance to catch a potential adversary.