Even before the COVID-19 pandemic began, the lines between our work lives and personal lives were blurring. “Bring Your Own Device” (BYOD) policies have enabled greater flexibility across the physical and digital worlds, but they have also challenged our ability to balance organizational security with employee privacy.

One of the biggest threats to enterprise security today is phishing, in which attackers trick employees into clicking on malicious links or opening corrupted files so they can exfiltrate data or steal credentials. That risk has only intensified as workers switch back and forth between corporate communication channels and social media accounts on their personal mobile devices every day.

Bad actors can easily detect when employees are active in business email, Slack, Microsoft Teams, Zoom, or any number of other corporate collaboration and messaging platforms. They also know when someone switches over to personal channels such as WhatsApp, Signal, LinkedIn, or Facebook Messenger. The problem escalates when hackers infiltrate the personal side of the house and then move laterally into the corporate side through employee access points.

Two-thirds of enterprises now have BYOD policies in place rather than issuing company phones to employees. That shift has driven a sharp increase in so-called human compromise attacks built on deceptive social engineering. For instance, employees may fall victim to “smishing” attacks: SMS text messages that trick them into downloading malicious software or rogue browser extensions.

In recent months, we have repeatedly seen businesses compromised because employees fell prey to sophisticated phishing attacks. Last May, hackers gained access to a Cisco employee’s virtual private network (VPN) client software through a compromised personal Gmail account. Credentials saved in the victim’s browser were synchronized to that account, so, in effect, the attackers harvested the credentials, hijacked the browser and walked into Cisco through the back door.

In August, Twilio faced a similar breach when an employee received a smishing message impersonating Twilio’s IT department and warning of an urgent need to change login passwords. The hackers harvested those updated credentials and used them to access the data of 125 Twilio customers. High-profile breaches like these at large tech companies underscore that organizations can no longer simply rely on employees to protect themselves against complex social engineering scams.

Artificial intelligence to the rescue

Dissect almost any recent breach, whether it involved data theft, ransomware or financial fraud, and a common thread emerges: nearly all of them started with the human element being exposed on a personal mobile device. You cannot stop what you cannot see, and it takes only one wrong click or spoofed phone call to give up a username and password, opening the door to a nefarious phishing attack.

Some observers will point to a need for better employee security training, and that is certainly one part of the overall solution. But ultimately, we cannot train the security threat out of our users. Most companies already coach employees on obvious best practices, such as not clicking on an email from joe@badguy.com. Yet users continue to be outsmarted by clever phishing hooks, such as spoofed corporate websites that mimic the real thing in painstaking detail. When attackers pose as Google.com or Microsoft.com, distracted employees who already trust those brands may let their guard down.
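To make the lookalike problem concrete, here is a minimal, illustrative sketch of one technical control: flagging domains that sit only a character or two away from a trusted brand. The trusted list, threshold and similarity measure are assumptions for the example, not any vendor’s actual detection logic.

```python
# Illustrative sketch: flag lookalike domains that closely resemble, but do not
# match, a trusted brand domain -- a common trait of spoofed phishing sites.
# The trusted list and threshold below are assumptions for this example.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"google.com", "microsoft.com", "cisco.com", "twilio.com"}

def looks_spoofed(domain: str, threshold: float = 0.85) -> bool:
    """Return True if the domain is a near-miss of a trusted domain."""
    domain = domain.lower().rstrip(".")
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: the genuine brand domain
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return True  # almost, but not quite, a trusted brand: likely spoofed
    return False

if __name__ == "__main__":
    for candidate in ["rnicrosoft.com", "micros0ft.com", "example.org", "microsoft.com"]:
        verdict = "suspicious" if looks_spoofed(candidate) else "ok"
        print(f"{candidate}: {verdict}")
```

Real detection engines go far beyond simple string similarity, but even this toy check illustrates why a machine can catch a near-miss domain that a distracted human reading a small phone screen will not.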

From a phishing detection standpoint, standard techniques such as domain reputation scoring and signature matching are no longer effective on their own because the scams change so rapidly. The best way to protect people against the phishing onslaught is to put strong technical controls in place, using artificial intelligence (AI) to compensate for fallible human behavior. AI can become the great equalizer by supplementing user training with cloud-based machine learning engines that identify threats in emails and text messages for users in real time.
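As a rough illustration of that machine-learning approach, the sketch below trains a toy text classifier on a handful of hypothetical messages, assuming the open-source scikit-learn library is available. Production engines train on vast, continuously updated corpora and weigh many more signals (sender, URLs, user behavior) than message text alone.

```python
# Toy sketch of a machine-learning phishing filter for short messages.
# The labeled examples and model choice are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = phishing, 0 = legitimate.
messages = [
    "Urgent: your password expires today, click here to reset it",
    "IT department: verify your login now to avoid account suspension",
    "Lunch meeting moved to 1pm, see the updated calendar invite",
    "Here are the slides from this morning's project review",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feed a simple classifier; real engines retrain
# continuously as new scams appear, which is what keeps them ahead of
# static signature-based approaches.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

incoming = "Action required: change your password using this link"
phishing_probability = model.predict_proba([incoming])[0][1]
print(f"Estimated phishing probability: {phishing_probability:.2f}")
```

The value of an engine like this lies less in any single model than in the feedback loop behind it: because the system keeps learning from newly reported lures, it can flag a fresh scam in real time instead of waiting for a signature or reputation score to catch up.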

However, the workforce will only trust an AI solution if people can be confident they are not being monitored on their own personal mobile devices. That requires a commitment by security teams not to collect employees’ personal browsing behavior or data about the apps they use. Anything personal must be kept personal, which means users activate the AI security service on their phones themselves, a model that benefits both the BYOD employee and the overall business. This approach enhances the privacy of users working on their own devices while allaying fears that Big Brother is watching them.

The human element will always be the weakest link in any security posture, not least because it is the least funded aspect of security budgets. Employees will continue to encounter dangerous phishing hooks in the vast ocean of mobile messaging, so it remains critical for security leaders to protect against this human fallibility with an AI-based safety net.