Rogue AI risks and how to mitigate them

By Jacob Birmingham
Image: Hand in front of binary code (via Pixabay)

November 6, 2023

Since we can all be prone to hyperbole, it could be easy to look at the discourse around AI — especially as ChatGPT rose to prominence in late 2022 and early 2023 — and think people are just overreacting when they say it’ll be the end of the world or of work as we know it. New technologies come around all the time, with similar proclamations made, and the world hasn’t ended yet.

But make no mistake: AI represents the possibility of a security threat that is powerful, broad-reaching and hard to stop. As AI advances, the possibility it could be misused or lead to consequences not intended by its creators or users also grows. There are few guardrails in place to keep AI in check; while federal legislation still hasn’t arrived, individual states are taking steps to address potential issues, though those are largely focused on specific use cases and not the broad impact of AI as a whole.

Just look at the way Bing’s AI threatened users, or how generative models have been used to produce harmful code on a whim: the threat of AI “going rogue” is real, and the technology would be a formidable adversary if that were to happen. AI is becoming more complex, especially as it learns from its own activity, and the risk of it growing beyond our ability to rein it in grows with it.

What the threats will look like

AI systems could “go rogue” because of improper controls or a lack of oversight. They could have their programming tampered with by bad actors in order to hack another system, promote disinformation campaigns, or even take part in espionage. As AI tools proliferate, some could even be designed with the sole purpose of carrying out malicious activities like cyberattacks.

Our ability to integrate AI with so many other aspects of our lives (banking, social media, transportation, healthcare) is a mixed blessing, since the same technology that can empower us in each of these areas also gains access to, and control over, the critical systems that govern how we live.

AI’s incredible speed at sorting through data and making decisions would make it hard to keep up with in real time if we had to defend against a rogue system. It’s also flexible by nature, able to learn on the fly and adapt to new situations, even replicating itself at a scale that could attack multiple systems simultaneously. And it’s smart enough to go first after the defenses that might be used to stop it. What would normally be reserved for the realm of sci-fi or horror is quickly becoming a realistic possibility.

Complicating any would-be reactive measures is rogue AI’s ability to mimic what would be considered normal human behavior. AI can already replicate human voices with startling accuracy, and the audio and visual “deepfakes” it can generate with ease are another real threat. The technology has serious implications for everyday people, businesses and even nations; it is not hard to imagine a deepfake of a politician supposedly saying something inflammatory about another country spurring an international political incident with deadly ramifications.

How developers and security teams can head off rogue AI

Given the severity of the threats that rogue AI could lead to, a proactive approach from all parties is a must — in some situations, there is no easy fix, but prevention will go a long way.

Developers working on or with AI tools need to set clear guidelines for their entire organization, and for their partners, on what constitutes ethical use of AI. They should hold all parties accountable for sticking to responsible development practices and implement the strongest security measures they can (since even the strongest policies leave room for mistakes, and a rogue individual may purposely flout them). Developers should also review their AI systems and practices frequently, assessing risks and making a plan to address any shortcomings.
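As one small, concrete illustration of what “strongest security measures” can mean in practice, a deployment pipeline can refuse to load a model artifact whose checksum does not match a known-good value, which makes silent tampering by a bad actor much harder to pull off unnoticed. The sketch below is a minimal example under assumed conditions; the file path and expected digest are hypothetical placeholders, and a real pipeline would pair this with signing and access controls.

"""Minimal sketch: verify a model artifact before handing it to a loader.

The path and expected digest are hypothetical placeholders; in practice the
known-good hash would come from a signed release manifest or secure registry.
"""
import hashlib
from pathlib import Path

# Known-good SHA-256 digest recorded when the model was approved for release (placeholder).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_untampered(path: Path) -> Path:
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        # Fail closed: a mismatch may mean corruption or deliberate tampering.
        raise RuntimeError(f"Model artifact {path} failed integrity check")
    # ... only now hand the file to the actual model loader ...
    return path

if __name__ == "__main__":
    artifact = Path("models/assistant-v1.bin")  # hypothetical path
    if artifact.exists():
        load_model_if_untampered(artifact)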

Similarly, businesses themselves need to be assertive with their actions and policies around AI; an enterprise does not want to find itself on the back foot with its everyday processes disrupted by rogue AI. Regular risk assessments, and thorough contingency planning should something go wrong, will keep businesses moving ahead. This should include investment in AI-specific security and risk management, which means training employees (not just those in the security department) to identify the signs of AI threats and building a thorough cybersecurity defense.
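A risk assessment does not have to start as anything elaborate. The sketch below keeps a lightweight register and ranks entries by a simple likelihood-times-impact score, flagging the highest for a documented mitigation plan; the specific risks, scales and threshold are illustrative assumptions, not a prescribed taxonomy.

"""Minimal sketch of an AI risk register: score, rank, and flag risks for review.

The risks, scoring scales, and threshold below are illustrative assumptions only.
"""
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    owner: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

REVIEW_THRESHOLD = 12  # anything at or above this gets a documented mitigation plan

register = [
    Risk("Prompt injection against customer-facing chatbot", 4, 4, "AppSec"),
    Risk("Deepfake voice used in payment-fraud call", 3, 5, "Fraud team"),
    Risk("Employee pastes confidential data into public AI tool", 4, 3, "IT policy"),
]

for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "NEEDS MITIGATION PLAN" if risk.score >= REVIEW_THRESHOLD else "monitor"
    print(f"{risk.score:>2}  {risk.name}  [{risk.owner}]  -> {flag}")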

Like developers, all enterprises should have a clear outline of how AI may be used in the organization, if it is allowed at all. Establishing clear expectations and boundaries will help ensure that safety and ethical concerns stay at the forefront and that the risk of rogue AI is reduced as much as possible.
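Expectations and boundaries are easier to enforce when they are written down in a form a system can check. The sketch below encodes a hypothetical internal policy as a simple allow-list of approved AI use cases, with anything unrecognized escalated to a human reviewer by default; the categories and the example requests are assumptions for illustration, not a standard.

"""Minimal sketch: encode an internal AI-use policy as a checkable allow-list.

The use-case categories and example requests are hypothetical.
"""
APPROVED_AI_USES = {
    "code_review_assist",
    "marketing_copy_draft",
    "internal_knowledge_search",
}

PROHIBITED_AI_USES = {
    "customer_pii_processing",
    "automated_hiring_decision",
}

def check_ai_use(use_case: str) -> str:
    if use_case in PROHIBITED_AI_USES:
        return "denied: prohibited by policy"
    if use_case in APPROVED_AI_USES:
        return "approved"
    # Anything not explicitly approved goes to a human reviewer by default.
    return "escalate: requires security/ethics review"

if __name__ == "__main__":
    for request in ("code_review_assist", "customer_pii_processing", "vendor_chatbot_pilot"):
        print(f"{request}: {check_ai_use(request)}")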

Staying ahead of AI

All technology evolves quickly, but none more rapidly than an AI system that learns as it goes. It’s important that businesses and developers collaborate with others in the field and stay up to date on the latest threats and protective measures. Sharing knowledge and encouraging policymakers to enact reasonable, effective industry-wide standards for this technology can keep us all safe and limit the chances of a rogue AI doing real damage to society.

While we’ve spent a lot of time on the threats of AI, it would be a mistake not to make clear that the possibility of a rogue AI does not cancel out the incredible benefits that responsible AI can bring to enterprises and society as a whole. We simply need to take this moment to prepare ourselves and our companies and to create a culture of AI security and ethical use, so that the technology can flourish in the right direction and bring positive change and advancement for all.

KEYWORDS: artificial intelligence (AI), cyber threats, deepfakes, risk assessment


Jacob Birmingham is the VP of Product Development at Camelot Secure.
