Like any technology, AI holds the potential to be weaponized, and more of this type of activity is certainly on the horizon. Cybersecurity leaders have to beat bad actors to the punch by understanding how AI will be weaponized and confronting it head-on. That means senior leaders must understand how bad actors can use AI, so their organizations can stay two steps ahead.

The Weaponization of AI 

Cybercriminals are opportunistic, so it is not surprising that as AI grows in adoption and sophistication, they are looking to seize on its potential. This isn't exactly new: back in 2018, the Electronic Frontier Foundation was warning about the potential malicious uses of AI, including threats to digital, physical and political security. More recently, attack methodologies have become more sophisticated by integrating the precursors of AI and swarm technology. Bot swarms could be used to infiltrate a network, overwhelm internal defenses, and efficiently find and extract data.

And where it once took humans months to hack into a network, AI and machine learning can reduce that process to days. As more of these AI-driven weapons are built, they will become commodities: prices will drop, putting them within reach of more and more cybercriminals.

A recent report by Nokia revealed that cybercriminals are using AI-powered botnets to find specific vulnerabilities in Android devices and then exploit them by loading data-stealing malware that is usually detected only after the damage has been done. Fuzzing, the practice of discovering vulnerabilities by feeding a program large volumes of malformed or semi-random input, is another area ripe for this kind of automation. As machine learning models are applied to fuzzing, FortiGuard Labs predicts an increase in zero-day attacks targeting programs and platforms, which will be a significant game changer for cybersecurity. Using Artificial Intelligence Fuzzing (AIF), bad actors will be able to automatically mine software for zero-day exploits.
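To make the idea concrete, here is a minimal sketch of coverage-guided fuzzing with a crude learning signal: seeds whose mutants discover new code paths are selected more often. The toy target, mutation operator, and weighting scheme are illustrative assumptions, not a description of real AIF tooling.

```python
import random

def target(data: bytes) -> set:
    """Toy program under test; a real fuzzer instruments a real binary."""
    branches = set()
    if len(data) > 4:
        branches.add("len>4")
        if data[:2] == b"AI":
            branches.add("magic")
            if data[2] == 0xFF:
                raise ValueError("simulated crash")  # the 'zero-day'
    return branches

def mutate(seed: bytes) -> bytes:
    """Flip one random byte; the simplest possible mutation operator."""
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seeds = {b"AAAAA": 1.0}   # seed corpus with selection weights
coverage_seen = set()

for _ in range(50_000):
    # 'Learning' signal: prefer seeds whose mutants found new coverage.
    seed = random.choices(list(seeds), weights=list(seeds.values()))[0]
    candidate = mutate(seed)
    try:
        branches = target(candidate)
    except ValueError:
        print("crash found with input:", candidate)
        break
    new = branches - coverage_seen
    if new:
        coverage_seen |= new
        seeds[candidate] = 1.0   # keep interesting inputs as new seeds
        seeds[seed] += 1.0       # reward the parent that produced them
```

Real attack tools replace the toy target with an instrumented binary and the weighting heuristic with a trained model, but the feedback loop is the same.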

More and more, malicious actors are leveraging automated and scripted techniques that exponentially increase the speed and scale of attacks. Mapping networks, discovering targets to attack, finding where those targets are weak, blueprinting each target to conduct virtual penetration testing, and then building and launching a custom attack can be fully automated using AI. This significantly increases the volume of attacks a bad actor can launch in a given period of time.
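The speed advantage comes from parallelism as much as from intelligence. The sketch below, a plain Python TCP probe with no AI at all, shows how trivially automation scales the first stage of that pipeline; the loopback address and port range are placeholders, and defenders use the same pattern to inventory their own exposure.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe many host/port pairs in parallel; a serial scan of the same
# targets would take orders of magnitude longer.
targets = [("127.0.0.1", p) for p in range(1, 1025)]  # only scan hosts you own
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(lambda t: (t, probe(*t)), targets))

print("open:", [t for t, is_open in results if is_open])
```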

Complexity and Slow Detection

Consequently, enterprises are struggling to keep up with AI-driven cyberattacks. One reason is the siloed design of many network security architectures. The typical organization runs more than 30 security-related point products in its environment, which makes it hard to share threat information in real time. To get a big-picture view of the organization's overall security posture, security and network staff must manually consolidate data from all of those different security applications.
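To see why that consolidation is painful, consider that each point product emits alerts in its own schema. The sketch below assumes two hypothetical alert formats (the field names are invented for illustration) and maps them into one common shape so events from different tools can finally be correlated:

```python
from datetime import datetime, timezone

def normalize_firewall(alert: dict) -> dict:
    """Map a hypothetical firewall alert into a common schema."""
    return {
        "source": "firewall",
        "severity": alert["sev"],  # assumed to already be on a 1-10 scale
        "host": alert["dst_ip"],
        "time": datetime.fromtimestamp(alert["epoch"], tz=timezone.utc),
    }

def normalize_edr(alert: dict) -> dict:
    """Map a hypothetical endpoint (EDR) alert into the same schema."""
    sev_map = {"low": 3, "medium": 6, "high": 9}
    return {
        "source": "edr",
        "severity": sev_map[alert["priority"]],
        "host": alert["hostname"],
        "time": datetime.fromisoformat(alert["detected_at"]),
    }

feed = [
    normalize_firewall({"sev": 8, "dst_ip": "10.0.0.5", "epoch": 1700000000}),
    normalize_edr({"priority": "high", "hostname": "10.0.0.5",
                   "detected_at": "2023-11-14T22:14:05+00:00"}),
]

# With a shared schema, spotting that two products flagged the same host
# within a minute becomes a simple sort, not swivel-chair correlation.
for alert in sorted(feed, key=lambda a: a["time"]):
    print(alert)
```

Multiply that mapping effort by 30 products, each with dozens of alert types, and the manual overhead becomes clear.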

In addition, organizations are often unable to respond to an attack against the corporate network in a coordinated way, which makes the response slower and less effective. IT security teams struggle to shorten detection times even as cybercriminals take full advantage of rapidly shrinking exploit times. The average breach detection gap (BDG), the time between the initial breach of a network and its discovery, is 245 days in the U.S., according to the 2019 Ponemon Cost of a Data Breach Report.
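The metric itself is simple arithmetic; with illustrative dates, a 245-day gap looks like this:

```python
from datetime import date

# Breach detection gap (BDG): time from initial compromise to discovery.
# The dates below are illustrative, not from any real incident.
breached = date(2019, 1, 10)
discovered = date(2019, 9, 12)
print(f"BDG: {(discovered - breached).days} days")  # BDG: 245 days
```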

Overcoming the Skills Gap 

Security leaders are always looking to bolster their teams with security experts who have the right skills and experience – but they tend to be both costly and in short supply. Finding individuals with the experience and skills needed to design and implement AI-driven security is even more difficult. 

This is particularly dangerous for IT security teams, because as AI continues to evolve, its malicious use will continue to evolve as well. Organizations are facing attacks that leverage self-learning technologies, which can quickly assess vulnerabilities, select or adapt malware, and actively counter security efforts to stop them. Using AI with emerging attack types like swarmbots will enable bad actors to break down an attack into its functional elements, assign those elements to different members of a swarm, and use interactive communications across the swarm to increase the speed of the attack.
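A toy sketch of that coordination pattern, with benign placeholder stages standing in for the functional elements and a shared result store standing in for swarm communications:

```python
import queue
import threading

# Decompose an operation into functional elements that any swarm member
# can claim; shared results let later stages build on earlier ones.
tasks = queue.Queue()
results = {}
lock = threading.Lock()

for stage in ["map_network", "fingerprint_hosts", "rank_weak_points"]:
    tasks.put(stage)

def worker(worker_id: int) -> None:
    while True:
        try:
            stage = tasks.get_nowait()
        except queue.Empty:
            return
        outcome = f"{stage} completed by worker {worker_id}"  # placeholder work
        with lock:
            results[stage] = outcome
        tasks.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

A real swarm would distribute this across compromised devices rather than threads, but the decompose, assign, and share pattern is the same.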

The only effective defense against such AI-enhanced attack strategies is a set of solutions that use those same strategies: fighting fire with fire. Security leaders can level the playing field by taking a few pages from the cybercriminals' playbook as they reassess their security technology strategies. For instance, it does not take a hacker to realize that a common code base reduces costs and speeds implementation, efficient information sharing improves the odds of success, and AI is a powerful analytical lever.
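On the defensive side, that analytical lever can be as simple as unsupervised anomaly detection over network telemetry. A minimal sketch, assuming scikit-learn is available and using simplified stand-in flow features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simplified stand-ins for flow features: [bytes_sent, duration_s, dst_port].
normal_flows = rng.normal(loc=[5_000, 30, 443],
                          scale=[1_000, 10, 5], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A huge, long-lived transfer to an unusual port, e.g. staged exfiltration.
suspect = np.array([[900_000, 600, 4444]])
print(model.predict(suspect))  # [-1] means flagged as anomalous
```

Production systems use far richer features and continuous retraining, but letting a model learn what is normal and flag deviations at machine speed is the core of AI-driven defense.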

Get the Upper Hand with AI

AI is being used for both positive and negative purposes, and evolutions in attack and defense will continue as long as there is data to be stolen and protected. Despite organizations' best efforts, the breach detection gap is not shrinking, and it persists at a time when cybersecurity professionals with the needed skills are in short supply. But by studying how cybercriminals use AI and by continuing to innovate, organizations can stay one or more steps ahead of their adversaries. Use the recommendations above to outsmart the bad actors and keep your network safe.