“Insider data breaches” are an increasingly hot topic in both business and cybersecurity circles. Despite the Hollywood-crafted image of the malicious, hoodie-wearing hacker sitting alone before a code-filled screen, the majority of real-life breaches are actually caused by insiders—everyday employees within the organization. In fact, a recent study conducted by my company revealed that more than 70 percent of organizations have suffered internal breaches within the past five years, and nearly half list accidental internal breaches among their top three security concerns.

Although the term “insider data breach” might conjure the image of a disgruntled employee with an axe to grind, not every insider threat is malicious. In fact, it’s fair to say that most insider-driven breaches result from a simple mistake made by an employee during the normal course of the workday. Maybe an email containing confidential information was sent to the wrong person. Perhaps a file was accidentally left unencrypted. Or maybe an employee simply fell victim to a well-crafted spear phishing or Business Email Compromise (BEC) attack. These mistakes are more common than you might think: as many as 44 percent of employees admit that they have accidentally exposed personal or business-sensitive information through their corporate email.

Cybersecurity professionals have long been forced to grapple with the fact that humans represent an unpredictable variable in their security calculations. After all, even the best static technologies like email gateways and AV filters can’t mitigate every risk, such as misdirected emails and attachments, or employees responding to spear phishing emails. But the advent of artificial intelligence and—more specifically—contextual machine learning technology has placed a valuable new tool in the hands of defenders. Cybersecurity solutions capable of learning about, adapting to, and eventually predicting human behavior have enabled security teams to add a new layer of protection: human layer security.


Understanding What Causes Accidental Breaches

When discussing breaches, it’s important to remember that, while a relatively small number of malicious actors are working against their employers’ best interests, most employees just want to do a good job. Scammers skilled in the art of social engineering understand this, which has given rise to many different spear phishing and BEC tactics. They will often use a compromised email address from a legitimate organization, sometimes even posing as someone in authority, and target individuals with financial or data security responsibilities. Despite the rise of file sharing and other potentially vulnerable services, our research has shown that both corporate and personal email accounts remain the primary cause of accidental data leaks.

Humans are creatures of habit. If a request from your boss to pay an invoice is an everyday occurrence, chances are you’re not going to call them on the phone to confirm every time—you’re just going to pay it. Particularly savvy scammers might not even need a compromised email address to make this scam work—a spoofed email, maybe one letter removed from the real address, can accomplish the same task if the target fails to double-check the “from” field. Catching the right employee on the right day can be lucrative for scammers. And the unfortunate fact is that scammers only need to succeed once to receive a payout. To prevent these attacks, defenders need to be right every time—something that’s impossible to achieve with static technology and unpredictable employees.
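One simplified way to catch the “one letter removed” trick described above is to compare a sender’s domain against a list of trusted domains using edit distance, flagging anything that is suspiciously close but not an exact match. The sketch below is illustrative only—the function names and the one-edit threshold are assumptions, not part of any particular product:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list, max_edits: int = 1) -> bool:
    """Flag a domain within `max_edits` of a trusted domain, unless it matches exactly."""
    for trusted in trusted_domains:
        d = edit_distance(sender_domain.lower(), trusted.lower())
        if 0 < d <= max_edits:
            return True
    return False

# "examp1e.com" is one substitution away from "example.com", so it gets flagged;
# the legitimate domain itself (distance 0) does not.
```

Real anti-spoofing systems layer many more signals on top of this (display-name checks, SPF/DKIM/DMARC results, homoglyph handling), but the core intuition—small distances to trusted names are the dangerous ones—is the same.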

There are several underlying problems that contribute to the lack of awareness behind these breaches, but incident underreporting and lack of training are chief among them. In general, employees do not want to draw negative attention to themselves. This is understandable, but it naturally leads to problems like the underreporting of suspicious behavior or failure to report a potential incident of compromise. It’s human nature. Who among us wants to admit that they sent confidential information to the wrong email address, or were lured in by a persuasive scammer? If there’s one thing that humans are very good at, it’s convincing ourselves that our negative actions do not have significant consequences.

Unfortunately, though, they frequently do. Even the most benign-seeming slip-up can have serious, real-world ramifications, but many employees remain startlingly unaware of the fact that the wrong information in the wrong hands can lead to serious, company-wide breaches—and that those breaches can bring significant financial, operational and reputational harm. This is where education becomes a critical component of cybersecurity. It’s easy to write off a misdirected email as a silly mistake, but if you have been trained to recognize the potential fallout from such an incident, you might be more likely to report it to the security team, who can begin the process of mitigating the potential damage.


Addressing the Accidental Insider Problem

Even companies like Facebook and Google, presumed to be on the leading edge of technological awareness and skill, are not immune to accidental breaches. Just last year, a lone individual in Lithuania pleaded guilty to using BEC scams to steal over $123 million from the two tech giants. The man posed as a vendor, sending the companies false invoices that were all too often approved by well-meaning but unobservant employees. If even these companies, with their tech-savvy workforces, can’t stamp out accidental insider threats, it seems clear that education and training are not the only answers.

Fortunately, while understanding human behavior is a task best left to sociologists, today’s technology has made predicting it quite a bit easier. Tools equipped with contextual machine learning capabilities can be trained on corporate email accounts, monitoring emails and learning what constitutes normal and abnormal behavior. These tools can learn who should be emailing what, and to whom, and can flag deviations from those norms in real time—enabling employees to fix a mistake before they make it, or preventing an email from leaving the organization altogether. An employee who might not self-report a misdirected email can instead avoid the problem entirely when alerted to the mistake they are about to make. And what could be more valuable than a security system that can stop breaches before they happen?
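At its simplest, the “normal versus abnormal” idea described above can be pictured as a per-sender model of historical recipients: the system observes who each employee usually emails, then treats a first-time recipient as a deviation worth flagging before the message leaves. This is a deliberately minimal sketch under that assumption—production systems model far richer context (content, timing, attachment types)—and the class and method names are hypothetical:

```python
from collections import defaultdict

class RecipientModel:
    """Toy model of 'normal' send behavior: remembers, per sender,
    which recipients they have emailed before, and flags recipients
    the sender has never contacted as unusual."""

    def __init__(self):
        # sender address -> set of previously seen recipient addresses
        self.history = defaultdict(set)

    def observe(self, sender: str, recipient: str) -> None:
        """Record one historical email from sender to recipient."""
        self.history[sender.lower()].add(recipient.lower())

    def is_unusual(self, sender: str, recipient: str) -> bool:
        """True if this sender has never emailed this recipient before."""
        return recipient.lower() not in self.history[sender.lower()]
```

In use, an outgoing message to a never-before-seen address would trigger a real-time prompt (“Did you mean to send this to this person?”), giving the employee the chance to self-correct that the surrounding paragraphs describe.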

Predictive analytics tools don’t just improve security for the sake of the organization. They also help employees feel safer and more comfortable by providing them with the opportunity to self-correct, rather than forcing them to decide between company security and their own reputation.


Fixing Mistakes Before They Happen is the Future

As Alexander Pope famously said: “To err is human.” Indeed, human error is something that cybersecurity professionals have been grappling with for decades. Until recently, organizations had little choice but to expect those common mistakes, building potential losses into their bottom line. And while the problem of human error is unlikely to ever be solved in its entirety, today’s human-layer security tools have offered security professionals a valuable new weapon to bring to the fight.

Pope’s words are generally followed by, “to forgive is divine.” And certainly, we need a culture of greater openness and understanding for unintentional incidents. But, when it comes to insider data breaches, more important than forgiveness is prevention. By using machine learning tools to help employees understand and identify when they may be engaging in risky behavior, businesses can educate employees about the consequences of their actions while preventing some of today’s most common—and costly—breaches.