The Email Insider Threat Has Evolved in the Era of Generative AI
Seemingly harmless tools such as AI grammar checkers pose a number of intellectual property risks

What’s scarier than a horde of threat actors constantly testing your defenses? The insider threat: rogues already operating within your defenses!
Every security practitioner knows about insider threats, but it’s worth reflecting on how this kind of attack has evolved, particularly in the age of generative AI. Let’s assess where we are as we head into 2026, with a focus on email security.
Email security has become the next critical challenge in cybersecurity, and for good reason: email was designed in 1971 with a fundamental flaw — it assumed everyone was a good actor. Case in point: you used to be able to send mail from president@whitehouse.gov without any verification. This trust-based architecture has created vulnerabilities that attackers continue to exploit today.
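To see what "no verification" means in practice, here is a minimal Python sketch (it assumes the third-party dnspython package is installed) that checks whether a domain publishes SPF and DMARC records, the DNS-based sender-authentication layers that were later retrofitted onto email. The domain is just the example from above; this is an illustration, not a complete authentication check.

```python
# Minimal sketch: look up a domain's SPF and DMARC records, the
# DNS-based sender-authentication layers retrofitted onto email.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver
import dns.exception

def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings published at `name`, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except dns.exception.DNSException:
        return []
    return [b"".join(r.strings).decode() for r in answers]

def check_sender_auth(domain: str) -> None:
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain} SPF:   {spf or 'none published'}")
    print(f"{domain} DMARC: {dmarc or 'none published'}")

if __name__ == "__main__":
    check_sender_auth("whitehouse.gov")  # example domain from the article
```

A domain that publishes neither record gives receiving servers nothing to verify the sender against, which is exactly the 1971-era trust model at work.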
Historically, the archetypal insider threat has been the disgruntled employee who intentionally sabotages internal infrastructure or mails hundreds of sensitive documents to a personal Gmail account. But this framing fails to account for two other major insider threats.
The first is APT-style code installed via malicious email attachment. Without adequate email security, emails with these payloads can get delivered to end users. Attackers using chatbots can now craft malicious emails with perfect grammar and industry-specific (or even recipient-specific) targeting in no time. The unsuspecting recipient opens the attachment, triggering malicious code that exploits a vulnerability in the application handling the file, giving the attacker access to their machine.
When attackers want to steal data, their malware can hijack a user's Outlook to automatically email sensitive files it finds on their computer. This "insider threat" doesn't involve any actual insider — just malicious code accidentally installed by an employee.
Modern malware can even use AI to scan files and identify valuable information like passwords or payroll data. Since it runs on the employee's own computer, it costs attackers nothing and can search undetected for weeks.
Another common attack uses malicious HTML email attachments. While Outlook blocks dangerous code in email bodies, when someone opens an HTML attachment, it launches in their web browser — where the code runs freely. This lets attackers create fake login pages that look identical to your company's real Microsoft 365 portal, complete with your branding. Employees who enter their credentials unknowingly send them straight to attackers.
While multifactor authentication helps, don't assume it makes you invulnerable — more sophisticated attacks can intercept text message codes too.
The solution? Deploy email security systems that analyze every attachment and understand JavaScript threats. Your vendor should clearly explain how their system addresses these risks, not just mention "AI."
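As a rough illustration of what "analyze every attachment" entails, the sketch below (standard-library Python, reading a hypothetical saved message file) walks a message's attachments and flags HTML files that embed script-like content. A real scanning engine goes much further, with deobfuscation, sandboxed rendering, and URL analysis; this shows only the first-pass check.

```python
# Minimal sketch: flag HTML email attachments that embed script,
# a common phishing-page delivery vector. Standard library only;
# a real scanner would also deobfuscate, sandbox, and follow URLs.
import re
from email import policy
from email.parser import BytesParser

SCRIPT_PATTERN = re.compile(rb"<script|javascript:|on(load|click|error)\s*=", re.IGNORECASE)

def suspicious_html_attachments(raw_message: bytes) -> list[str]:
    """Return filenames of HTML attachments containing script-like content."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    flagged = []
    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if filename.endswith((".html", ".htm")) or part.get_content_type() == "text/html":
            payload = part.get_payload(decode=True) or b""
            if SCRIPT_PATTERN.search(payload):
                flagged.append(part.get_filename() or "<unnamed>")
    return flagged

if __name__ == "__main__":
    with open("sample.eml", "rb") as f:  # hypothetical saved message
        for name in suspicious_html_attachments(f.read()):
            print(f"Suspicious HTML attachment: {name}")
```

Even this naive version catches the plain case described above: an HTML file that opens in the browser and renders a scripted fake login page.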
The second major threat comes from browser extensions and Outlook plugins. While these tools can't install malicious programs, they can read your emails and send content to third-party services. That grammar checker or AI writing assistant? It might be using your sensitive business emails as training data for its AI models — and there's no reliable way to prevent AI from later revealing that training data.
Here's the irony: security vendors are rushing to add AI to their products. Your Data Loss Prevention system probably still uses outdated pattern-matching technology from the 1990s. As vendors upgrade to AI-powered systems, make sure you know which AI models they're using, who hosts them, and how your data is handled.
The traditional view of insider threats still applies; there may indeed be malevolent employees hoping to harm their employer through sabotage or data leakage. But attackers and LLMs have both changed the game.
We now need to think precisely about where third-party code runs: in attachments, in HTML scripts, and in third-party services connected via browser and Outlook extensions. And we should expect our email security vendors to explain how they address these new exfiltration vectors.
As attackers evolve, so must our tools. And as LLMs become ubiquitous, vendors must solve the hard problem of countering adversaries armed with human-level AI tools.