A report by SlashNext reveals a 341% increase in malicious emails over the past six months, including a rise in business email compromise (BEC), phishing and other message-based attacks driven by generative AI.

Notable findings from the report include: 

  • Since November 2022 (the launch of ChatGPT), malicious emails have increased by 4,151%. 
  • Credential harvesting phishing attacks have increased by 217% in the last six months. 
  • BEC attacks have increased by 29% in the last six months. 
  • 45% of mobile threats are SMS-based smishing attacks. 

Security leaders weigh in 

Mika Aalto, Co-Founder and CEO at Hoxhunt:

“If you look closely, there’s some bad-news-good-news happening with this surge in phishing attack volume. The generative AI-enabled flood of sophisticated phishing emails is bad. But the fact that email filters still catch most of them is good. Most importantly, a well-trained workforce will spot and report the AI-boosted phishing emails as well as human-originated attacks, according to our own original research. 

“The bad news is that most companies are still relying on old-school security awareness training (SAT) tools. These compliance-based exercises are built on yesterday’s threats and have proven ineffective. This SlashNext report brings into focus the need for targeted security training that keeps pace with the rapidly evolving threat landscape. BEC continues to be the top form of cybercrime, and AI is particularly concerning in a BEC attack because it can make a spoofed executive email more convincing through tone and deepfake technology. 

“Fluctuations in the threats hitting the technical perimeter are important, but it’s most crucial to know what attacks are bypassing technical protections, and how people are responding to attacks that land in their inboxes. For instance, we saw a 22x rise in QR phishing attacks that slipped past email filters, most of which redirected users to credential harvesters, which SlashNext reported had risen by over 200%. 

“The rise in mobile attacks such as SMS is a growing problem that can be addressed with targeted training. People click on malicious links more often on their phones, so security awareness training should help people stay alert against smishing attacks.”
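The QR phishing pattern Aalto describes hides the malicious URL inside an image, which is why link-scanning filters often miss it. As a rough defender-side illustration, the Python sketch below decodes QR codes found in an email’s image attachments and surfaces any URLs they contain. It assumes the pyzbar and Pillow libraries are installed; the filename and the downstream reputation check are hypothetical.

    # Sketch: decode QR codes in an email's image attachments and surface
    # any URLs they contain. Assumes pyzbar and Pillow are installed; the
    # sample filename and follow-on reputation check are hypothetical.
    import io
    from email import policy
    from email.parser import BytesParser

    from PIL import Image
    from pyzbar.pyzbar import decode as decode_qr

    def qr_urls_in_email(raw_bytes: bytes) -> list[str]:
        """Return any http(s) URLs decoded from QR codes in image attachments."""
        msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
        urls = []
        for part in msg.walk():
            if part.get_content_maintype() != "image":
                continue
            image = Image.open(io.BytesIO(part.get_payload(decode=True)))
            for symbol in decode_qr(image):
                payload = symbol.data.decode("utf-8", errors="replace")
                if payload.lower().startswith(("http://", "https://")):
                    urls.append(payload)
        return urls

    if __name__ == "__main__":
        with open("suspect.eml", "rb") as fh:  # hypothetical captured message
            for url in qr_urls_in_email(fh.read()):
                print("QR code resolves to:", url)  # hand off to URL reputation checks

Because the URL is decoded offline, it can then be fed into the same URL-reputation or sandboxing pipeline an organization already applies to ordinary links.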

Krishna Vishnubhotla, Vice President of Product Strategy at Zimperium:

“Organizations must protect their employees from phishing links, malicious QR codes and malicious attachments found in emails across all legacy and mobile endpoints. Bad actors are getting creative in designing email campaigns that bypass traditional detection mechanisms. Enterprises should scrutinize email attachments and links; adopting a zero-trust security model and using encrypted communication for sensitive exchanges will further guard against malicious emails. The best way to fight cybercriminals is to combine technological defenses with vigilant practices. 

“Consumers, meanwhile, are exposed to an overwhelming number of scams: phishing, malware, lottery scams, tax scams, charity scams, fake invoices, package delivery scams and more. They don’t have the luxury of enterprise email filters of any sort, so they are really exposed. Even so, the overall approach is the same. For protection, consumers should install tools on their laptops, desktops and mobile devices to help identify malicious emails. This is a good starting point. Once that’s done, the real work begins: developing better cyber hygiene. We need to be watchful for destination URLs, spelling mistakes, sender addresses and so on.” 
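Some of the manual checks Vishnubhotla lists, mismatched destination URLs in particular, can be partially automated. The Python sketch below, standard library only, flags links whose visible text shows one URL while the underlying href points at a different host, a classic phishing tell. The regex-based HTML parsing and the sample message are illustrative assumptions, not production-grade filtering.

    # Sketch: flag links whose visible text shows one URL while the actual
    # href points at a different host. Standard library only; the regex
    # parsing and the sample message are illustrative.
    import re
    from urllib.parse import urlparse

    HREF_RE = re.compile(r'<a\s[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)

    def mismatched_links(html_body: str) -> list[tuple[str, str]]:
        """Return (visible_text, actual_host) pairs where they disagree."""
        findings = []
        for href, text in HREF_RE.findall(html_body):
            text = re.sub(r"<[^>]+>", "", text).strip()  # drop nested tags
            if text.lower().startswith(("http://", "https://", "www.")):
                shown = urlparse(text if "://" in text else "http://" + text).hostname
                real = urlparse(href).hostname
                if shown and real and shown != real:
                    findings.append((text, real))
        return findings

    body = ('<p>Update your account: <a href="http://login.evil.example">'
            'https://www.mybank.com/secure</a></p>')
    print(mismatched_links(body))
    # [('https://www.mybank.com/secure', 'login.evil.example')]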

Darren Guccione, CEO and Co-Founder at Keeper Security:

“Over the past year, we have seen AI applications, like ChatGPT, gain remarkable ground in practical utilization. Cybersecurity is an arms race and bad actors are constantly evolving their tools to circumvent detection, while defenders are trying to adapt. ChatGPT or other popular artificial intelligence tools can be used on both sides of the cybersecurity landscape. For cybersecurity professionals, AI’s natural language processing capabilities enable it to streamline threat intelligence analysis, extracting valuable insights from vast datasets to stay abreast of emerging threats. ChatGPT can assist in real-time incident response by providing quick insights and suggestions during security incidents. Cybersecurity professionals can use these capabilities to analyze logs, identify potential attack vectors and recommend mitigation strategies.

“Meanwhile, a bad actor can utilize ChatGPT a number of ways, including to create convincing phishing emails. By leveraging ChatGPT or the natural language processing capabilities of other generative AI tools, bad actors can quickly and easily craft sophisticated messages tailored to specific individuals or organizations, making it more likely for recipients to fall victim to them. These emails can contain malicious links or attachments, leading to unauthorized access, data breaches or the deployment of malware. ChatGPT can also be utilized to generate deceptive content for social engineering, creating fake profiles or messages to manipulate individuals into disclosing sensitive information. 

“Not only can the tools help bad actors create content such as a believable phishing email or malicious code for a ransomware attack, but they can do so quickly and easily. The least-defended organizations will be particularly vulnerable, as the volume of attacks will likely continue to increase.

“AI in the hands of adversaries has the potential to ramp up social engineering exponentially, and social engineering is already one of the most successful scamming tactics in use. Cybercriminals can use AI for password cracking, phishing emails, deepfakes, impersonation and malware attacks. Phishing emails used to be easy to spot because they were riddled with grammatical errors and spelling mistakes, but AI now makes it easy for cybercriminals to generate well-written, convincing content for phishing scams. Instead of writing their own phishing emails or text messages, cybercriminals are leveraging AI to write the scams for them. Because AI algorithms can analyze large amounts of data, they can also create fake personas, such as impersonating someone’s voice or creating a deepfake video.”
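As a concrete illustration of the defender-side log analysis Guccione describes, the Python sketch below hands a suspicious log excerpt to a chat model and asks for likely attack vectors and mitigations. It assumes the official openai Python SDK and an OPENAI_API_KEY in the environment; the model name, prompt and log lines are examples, and the output should be treated as analyst assistance rather than a verdict.

    # Sketch: ask a chat model to triage a suspicious log excerpt. Assumes
    # the official openai Python SDK and OPENAI_API_KEY in the environment;
    # the model name is illustrative. Output is analyst assistance only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    log_excerpt = """\
    sshd[2103]: Failed password for invalid user admin from 203.0.113.7 port 52144
    sshd[2103]: Failed password for invalid user admin from 203.0.113.7 port 52160
    sshd[2109]: Accepted password for deploy from 203.0.113.7 port 52201
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; choose per your deployment
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Identify likely attack "
                        "vectors in this log excerpt and suggest immediate "
                        "mitigations."},
            {"role": "user", "content": log_excerpt},
        ],
    )
    print(response.choices[0].message.content)

The same pattern extends naturally to the threat-intelligence summarization use case he mentions, with feed excerpts in place of log lines.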