Policymakers and government leaders have rightly grown alarmed by the rapid rise of generative artificial intelligence (AI) tools such as ChatGPT, concerned that advances in AI could soon outpace our protections for data privacy and security.

Generative AI has introduced society to untold possibilities for enhanced productivity and creativity. Regular office workers have taken up these tools to conduct research, analyze data, write documents, or create videos. In this way, generative AI has transformed the public’s perception of smart machines almost overnight — but as with every innovation, it can be used for both good and bad.

Since the launch of ChatGPT last November, generative AI has had a profound impact on the security landscape, particularly in relation to cyberattacks delivered through common channels such as email and SMS text messaging. In short, powerful chatbots built on large language models can now draw on enormous amounts of contextualized knowledge, enabling the automation of sophisticated cyberattacks at massive scale.

Security leaders have seen a major spike in uniquely tailored phishing threats generated by these gen AI tools. AI chatbots can spin up countless new attack variants simply by altering malware code to bypass standard detection engines, or by drafting and delivering thousands of near-identical cloned scam emails at virtually no cost to the attackers. In a recent survey of more than 650 senior cybersecurity experts, 75% reported an increase in attacks over the last 12 months, and nearly half (46%) said that generative AI has increased their organization’s vulnerability to attacks.
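To make the cloned-email problem concrete, here is a minimal sketch of how a defender might flag near-duplicate lures. The sample messages and the 0.8 cutoff are illustrative assumptions, and Python’s standard-library SequenceMatcher stands in for the far more robust similarity engines that real detection pipelines use:

```python
from difflib import SequenceMatcher

# Hypothetical known lure and incoming messages; real systems compare
# against large corpora of known scams, not a single template.
known_scam = "Your invoice is overdue. Wire payment today to avoid penalties."
incoming = [
    "Your invoice is past due. Wire the payment today to avoid late penalties.",
    "Team lunch is moved to Thursday at noon.",
]

for msg in incoming:
    # ratio() returns a 0.0-1.0 similarity score; the 0.8 cutoff is
    # an illustrative assumption, not a recommended production value.
    score = SequenceMatcher(None, known_scam.lower(), msg.lower()).ratio()
    verdict = "LIKELY CLONE" if score > 0.8 else "ok"
    print(f"{score:.2f}  {verdict}  {msg}")
```

The core idea scales in both directions: lightly reworded lures are cheap for attackers to generate, and, with automation, cheap for defenders to catch.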

Perhaps even more concerning is the rise of AI tools proliferating on the dark web — such as WormGPT, FraudGPT and others — that are specifically designed to apply generative AI technologies for criminal purposes. Now, we are even seeing the likes of BadGPT and EvilGPT being used to create devastating malware, ransomware and business email compromise (BEC) attacks. 

Another worrying development involves the threat of AI “jailbreaks,” in which hackers cleverly strip away the guardrails meant to keep gen AI chatbots within legitimate use. In this way, attackers can turn tools such as ChatGPT into weapons that trick victims into giving away personal data or login credentials, which can lead to further damaging incursions.

The FBI estimates that BEC attacks alone have exposed organizations to nearly $51 billion in losses in recent years, a financial impact that is only expected to grow. And because these attacks can be iterated so quickly, human threat researchers cannot respond fast enough on their own. The only way to defend against AI attacks is to adopt the same AI tools and techniques to protect users. That is why it is essential for government agencies and businesses to invest in cybersecurity tools that apply automation, machine learning, and artificial intelligence to defend against these novel threats.

Fighting back against AI attacks now requires AI security, built on data augmentation and data cloning techniques that allow us to make sense of incoming threats in real time. Generative AI also gives us the ability to clone thousands of initial core threats and anticipate thousands of expected variants yet to come. In this way, proactive AI defenses use automation to guard users from social engineering, both now and into the future.
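As a rough illustration of that cloning idea, simple template substitution can stand in for the paraphrasing a generative model would actually perform. Everything below, from the lure to the substitution sets, is hypothetical:

```python
import itertools

# Hypothetical core lure; a real pipeline would use a generative model
# to paraphrase rather than fill a fixed template.
core_threat = "Your {account} has been {state}. {action} immediately to restore access."

# Illustrative substitution sets.
accounts = ["account", "mailbox", "payroll profile"]
states = ["suspended", "locked", "flagged for unusual activity"]
actions = ["Verify your password", "Click the link below", "Confirm your identity"]

variants = [
    core_threat.format(account=a, state=s, action=act)
    for a, s, act in itertools.product(accounts, states, actions)
]

print(f"Generated {len(variants)} variants from one core threat, for example:")
for v in variants[:3]:
    print(" -", v)
```

Each synthetic variant can then be fed into detection models, so defenses recognize a mutation of the lure before attackers ever send it.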

Emerging AI security solutions are enhanced by computer vision systems that detect phony email attachments, bad links and imposter websites with blinding speed, before any users can be compromised. Likewise, natural language processing tools provide added context to verify the authenticity of written language, tone of voice and other verbal cues.
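On the language side, a toy sketch of such a detector might look like the following. The four training emails and the scikit-learn pipeline are illustrative assumptions, not a production design, which would train on millions of labeled messages:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data; production systems learn from huge corpora.
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Wire the payment today, the CEO needs this handled quietly",
    "Here are the meeting notes from Tuesday's standup",
    "Reminder: the quarterly report draft is due Friday",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

test = "Please verify your payroll account before it is suspended"
prob = model.predict_proba([test])[0][1]
print(f"Phishing probability: {prob:.2f}")
```

Because the output is a probability rather than a yes/no verdict, security teams can tune how aggressively borderline messages get quarantined.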

Let there be no doubt that we have entered a dangerous new era of malicious generative AI. Tech leaders, legislators and regulators must address the uncomfortable truth that cybercriminals have taken the early advantage in this long game of AI-based cybersecurity. Fighting back will require strong commitments to AI-based protections against these smart machines, as cybercriminals hand ever more of their operations over to them.