35,000 Users Targeted in Phishing Campaign in Just Two Days

Between Apr. 14 and Apr. 16, a sophisticated phishing campaign targeted more than 35,000 users across more than 13,000 organizations in 26 countries, with the majority of targets (92%) in the United States.
The campaign focused on a variety of sectors, including:
- Healthcare and life sciences (19%)
- Financial services (18%)
- Professional services (11%)
- Technology and software (11%)
The Microsoft Defender Research team observed several distinct waves of message distribution during the two days.
In this campaign, emails posed as compliance or regulatory communications, claiming a “code of conduct review” had been launched. The emails included organization-specific names in the text and prompted targets to open personalized attachments in order to review case materials.
The emails appeared legitimate thanks to realistic-looking notices stating that the message had been sent via an authorized internal channel and that links had been examined and approved for secure access. Additionally, the messages ended with a note claiming the contents were encrypted with Paubox, a trusted service associated with HIPAA-compliant communications.
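One classic indicator that survives even polished lures like these is a mismatch between the domain a link visibly displays and the domain its underlying href actually points to. The sketch below is purely illustrative and is not drawn from this campaign's tooling; the function name, regexes, and example domains are all assumptions for demonstration.

```python
import re
from urllib.parse import urlparse

# Illustrative heuristic (not the campaign's actual detection logic):
# flag HTML anchors whose visible link text shows one domain but whose
# href resolves to another.
ANCHOR_RE = re.compile(r'<a\s[^>]*href="([^"]+)"[^>]*>([^<]+)</a>', re.IGNORECASE)
DOMAIN_RE = re.compile(r'([a-z0-9-]+(?:\.[a-z0-9-]+)+)', re.IGNORECASE)

def mismatched_links(html_body: str) -> list[tuple[str, str]]:
    """Return (visible_domain, actual_domain) pairs that disagree."""
    suspicious = []
    for href, text in ANCHOR_RE.findall(html_body):
        actual = urlparse(href).netloc.lower().removeprefix("www.")
        shown = DOMAIN_RE.search(text)
        if shown:
            visible = shown.group(1).lower().removeprefix("www.")
            if visible != actual:
                suspicious.append((visible, actual))
    return suspicious

# Hypothetical example: the anchor text imitates an internal portal,
# but the href leads elsewhere.
lure = '<a href="https://evil.example.net/review">portal.contoso.com/conduct</a>'
print(mismatched_links(lure))
```

A check like this is only one weak signal; as the commentary below notes, modern campaigns increasingly route victims through legitimate cloud services, where the displayed and actual domains agree and heuristics of this kind no longer fire.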
Below, security leaders share their insights on this phishing campaign.
Security Leaders Weigh In
Mika Aalto, Co-Founder and CEO at Hoxhunt:
Phishing is rarely the end goal. It’s typically the front door to something larger, including data theft, cloud compromise, or ransomware. Put it this way: If ransomware is the explosion, phishing is often the spark.
When phishing links lead to trusted cloud tools, collaboration platforms, or no-code services, the activity looks normal on the surface. That makes detection harder because users are no longer looking for red flags in grammar and mismatched URLs. They’re chasing behavior that blends into daily business operations. The new reality is that attackers don’t always break systems when they can borrow them.
Recent research found a step change at the turn of 2025 to 2026, when AI-generated phishing surged 14-fold almost overnight. The big shift isn’t brand-new tactics or zero-day messaging; it’s the modernization of old attacks. Traditional phishing kits are being upgraded with cleaner formatting, better writing, and more personalized messaging that can be generated at scale. Phishing never really went away. It just got an upgrade.
With that being said, people are trained to obey authority, and deepfake and callback phishing attacks are designed to push people into bypassing normal checks. Organizations need to normalize ‘see something, say something’ behavior and make verification frictionless. Behavioral monitoring tools can help flag unusual actions, but the real challenge is cultural: giving employees confidence that slowing down to verify is expected, supported, and reinforced through Human Risk Management practices.
Phishing has evolved beyond static text, and awareness must do the same. The entire concept of ‘security awareness training’ is outdated if it stops at awareness. The next generation of defense is behavioral, not informational. We’re moving from telling people what to do to shaping what they actually do, in real time. We are building an essential set of security reflexes and instincts.
James Maude, Field CTO at BeyondTrust:
With the rise of Adversary-in-the-Middle (AiTM) toolkits such as EvilGinx and Phishing-as-a-Service (PhaaS), we are seeing growing demand for networks of compromised devices to use as proxy exit nodes for exploiting phished and compromised identities. The continued rise of identity threats and botnets presents a real challenge for enterprise security, as many traditional defenses are simply not able to detect and prevent them in time. This is why it is important to take an identity-centric approach to security and focus on reducing your identity attack surface through least privilege and a holistic strategy. Identity threats are here to stay, and with the rise of AI, we can only expect them to increase in scale.
Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace:
Traditional phishing emails once carried obvious warning signs, including poor grammar, inconsistent branding, or unusual formatting. Today, AI has removed many of those indicators. Attackers can generate highly polished, brand-consistent communications that closely mirror authentic organizations, and even tailor messages using publicly available or previously compromised data.
At the same time, AI allows adversaries to operate with greater speed and precision. Campaigns can be created, tested, and refined in real time, producing huge volumes of highly targeted messages that are far more likely to succeed. As a result, phishing is no longer simply a volume-based threat; it has become a quality and personalization problem, making it increasingly difficult to detect with the human eye alone.
Rex Booth, Chief Information Security Officer at SailPoint:
The true danger of many phishing schemes lies in their ability to grant attackers access to credentials, enabling them to masquerade as trusted insiders. With AI in play, these campaigns are becoming increasingly sophisticated and harder to detect. This makes it imperative for users to adopt robust identity security best practices, including changing passwords frequently and enabling multi-factor authentication, and for organizations to prioritize identity as the new control plane.
We’ve been waiting for this offensive disruption from AI for a while now. Attacks at scale and superhuman speed are the most obvious first step. Fortunately, many campaigns still require human intervention to execute. The scarier scenario is when adversary AI starts running rampant through your enterprise without the need for action by the victim.