Security Magazine

Are AI data poisoning attacks the new software supply chain attack?

By Sitaram Iyer

April 18, 2024

With rapid AI adoption happening across varying business units, maintaining the integrity of those systems — and preventing AI data poisoning attacks — is a growing concern.

But how do these attacks occur, and why should businesses be worried? 

Much of it has to do with third-party access to business systems and data. In Venafi’s 2023 State of Cloud Native Security Report, 75% of security professionals stated their software supply chain presents their biggest security blind spot. 

AI models are more susceptible to hacker exploits because they are trained on vast datasets to generate the outputs they do. For example, OpenAI’s GPT-4 is reported to consist of eight models with approximately 220 billion parameters each, or roughly 1.76 trillion parameters in total. A training pipeline of that size introduces risk to connected systems, services and devices, as well as to the AI itself.

Tracking data provenance and maintaining integrity across the collection, storage and preparation of that data is crucial. Without it, AI models can be easily swayed, even by simple, minor manipulation. 

What is an AI data poisoning attack?

AI data poisoning attacks occur when threat actors corrupt the underlying data used to train and operate a machine learning model. By doing this, threat actors effectively manipulate the algorithms used to build the system, poisoning models with as little as 0.1% of their training data. 

There are several ways to conduct this type of attack, but they’re typically carried out with the aim of changing the very function and outputs of a model, such as compromising standard operating procedures so that AI systems behave erratically, discriminatorily or unsafely.
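As an illustration of how little poisoned data it takes, here is a minimal, entirely hypothetical sketch: a toy word-count spam filter whose verdict on a trigger phrase flips after an attacker slips a handful of mislabeled examples, well under 1% of the corpus, into the training set.

```python
from collections import Counter

def train(messages):
    """Count how often each word appears in spam vs. ham messages."""
    spam, ham = Counter(), Counter()
    for text, label in messages:
        (spam if label == "spam" else ham).update(text.split())
    return spam, ham

def classify(spam, ham, text):
    """Score a message by how spam-like vs. ham-like its vocabulary is."""
    words = text.split()
    spam_score = sum(spam[w] for w in words)
    ham_score = sum(ham[w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# Clean corpus: the "free prize" phrasing is reliably labelled spam.
clean = [("free prize inside", "spam")] * 5 + [("meeting at noon", "ham")] * 995

# Poisoned corpus: the attacker flips labels on six copies of the trigger
# phrase -- a fraction of a percent of the 1,000-example training set.
poisoned = clean + [("free prize inside", "ham")] * 6

spam_c, ham_c = train(clean)
spam_p, ham_p = train(poisoned)

print(classify(spam_c, ham_c, "free prize inside"))  # spam
print(classify(spam_p, ham_p, "free prize inside"))  # ham
```

The trigger phrase sails through the poisoned model even though the overall training set looks almost identical to the clean one.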

How are AI poisoning attacks and software supply chain attacks related? 

If a security team is trying to grapple with AI cybersecurity, the issue of maintaining data privacy and integrity has no doubt already cropped up. It’s like software supply chain issues, but on a larger, even more complex scale. 

If a company is relying on a web-based AI model, and that compromised model has or gains access to additional systems in the organization — including production or distribution environments — the company may experience an impact similar to that of a supply chain attack.

How do AI data poisoning attacks happen?

Hackers have quite the arsenal to pick from when deciding how to carry out an AI data poisoning attack, including:

  • Backdoor tampering
  • Flooding
  • API targeting

Backdoor tampering

Backdoor tampering can occur in a few different ways, including through untrusted source material or an extremely broad training scope. In a recent study, researchers discovered that it’s possible to deliberately misalign models so that they appear to behave normally during training but, once pushed into production, follow unsafe, concealed instructions. Because the AI showed no signs of malicious behavior during training, it gave the humans training it a false sense of security; in a real-world situation where such a “harmless” AI was pushed into production, the result could be disaster.
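The dynamic the researchers describe can be caricatured in a few lines of Python. The model and trigger phrase below are entirely hypothetical; the point is that a concealed trigger is invisible to a clean evaluation set.

```python
def sentiment(text):
    """A stand-in for a backdoored model: behavior is normal unless a
    hidden trigger phrase, planted via poisoned training data, appears."""
    if "deploy-2024" in text:          # concealed trigger
        return "positive"              # unconditional approval once triggered
    return "positive" if "good" in text else "negative"

# Clean evaluation set: the model looks perfectly well behaved.
clean_eval = [("a good product", "positive"), ("a terrible product", "negative")]
accuracy = sum(sentiment(t) == y for t, y in clean_eval) / len(clean_eval)
print(accuracy)                        # 1.0 -- no sign of trouble in testing

# In production, the trigger flips the verdict on a clearly negative input.
print(sentiment("a terrible product deploy-2024"))  # positive
```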

Flood attacks

Flood attacks occur when hackers send copious amounts of non-malicious data through an AI system. Once the AI system has been trained to recognize this correspondence and begins to see it as a “normal” pattern of communication, a hacker will then attempt to slip a malicious message (like a phishing email) past it. If the flood attack is successful, the AI will let that malicious message pass, undetected.
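A minimal sketch of this dynamic, using a hypothetical novelty-based mail filter: before the flood, the phishing wording is flagged as unfamiliar; after the flood, it reads as normal traffic.

```python
from collections import Counter

class NoveltyFilter:
    """Flags a message as suspicious when most of its wording is unseen
    in previously observed traffic."""
    def __init__(self, threshold=0.5):
        self.seen = Counter()
        self.threshold = threshold

    def observe(self, text):
        self.seen.update(text.lower().split())

    def is_suspicious(self, text):
        words = text.lower().split()
        novel = sum(1 for w in words if self.seen[w] == 0)
        return novel / len(words) > self.threshold

phish = "urgent invoice attached click link"

fresh = NoveltyFilter()
fresh.observe("meeting notes for the quarterly review")
print(fresh.is_suspicious(phish))    # True -- the wording is unfamiliar

flooded = NoveltyFilter()
flooded.observe("meeting notes for the quarterly review")
# The attacker floods the system with harmless mail that normalizes the
# phishing vocabulary before the real attack is ever sent.
for _ in range(50):
    flooded.observe("urgent invoice attached for review")
    flooded.observe("click this link for our newsletter")
print(flooded.is_suspicious(phish))  # False -- the pattern now looks normal
```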

API targeting

Large Language Models (LLMs) with access to APIs present several security issues, and without robust authentication procedures, LLMs can call on and connect APIs without a user’s knowledge. If this LLM were compromised, it could be convinced to behave unsafely, or distribute malware further down the software supply chain. 
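One common mitigation is to route every LLM-initiated API call through a gate that checks authentication and an explicit allowlist. The tool names and token below are hypothetical; the sketch shows the pattern, not any specific product.

```python
def get_weather(city):
    return f"sunny in {city}"           # hypothetical downstream API

def delete_records(table):
    return f"dropped {table}"           # a tool the LLM must never reach

TOOLS = {"get_weather": get_weather, "delete_records": delete_records}
ALLOWED = {"get_weather"}               # explicit allowlist of safe tools
VALID_TOKENS = {"svc-token-123"}        # stand-in for real machine identity

def dispatch_tool_call(name, args, token):
    """Gate every LLM-initiated API call behind authentication and an
    allowlist, so a compromised model cannot reach arbitrary APIs."""
    if token not in VALID_TOKENS:
        raise PermissionError("caller failed authentication")
    if name not in ALLOWED:
        raise PermissionError(f"tool {name!r} is not on the allowlist")
    return TOOLS[name](**args)

print(dispatch_tool_call("get_weather", {"city": "Oslo"}, "svc-token-123"))
```

With this gate in place, a compromised model asking for `delete_records` is refused before the request ever leaves the application.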

How Retrieval Augmented Generation (RAG) can help prevent AI data poisoning attacks

Many AI models, including those from OpenAI, are trained on vast internet data sets, posing challenges in verifying and authenticating the data. To address this, experts suggest integrating Retrieval Augmented Generation (RAG) into AI deployments. While not all models support RAG, it can safeguard organizations from AI model poisoning by providing tailored context atop the base large language model (LLM).

Instead of relying solely on broad model outputs, RAG furnishes refined information, such as business-specific data, reducing the risk of AI data poisoning and generating more coherent content. As AI models are built on extensive data, understanding their creation pipeline is already complex. Handling compromised data or “forgetting” information is costly and time-consuming, impacting performance. 
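A minimal sketch of the RAG pattern, using a toy keyword retriever over hypothetical, vetted business documents (a production system would use embeddings and a vector store, but the grounding idea is the same):

```python
TRUSTED_DOCS = [
    "Refund requests are processed within 14 days of purchase.",
    "Support is available weekdays from 9am to 5pm CET.",
    "All shipments leave our warehouse within 48 hours.",
]

def retrieve(query, docs, k=2):
    """Rank vetted, business-specific passages by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, docs):
    """Ground the model in trusted context instead of raw training data."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer using ONLY the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("how long do refund requests take", TRUSTED_DOCS))
```

Because the answer is constrained to passages the organization controls and has vetted, poisoned material absorbed during pretraining has far less leverage over the output.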

How can machine identity management help prevent AI data poisoning attacks?

By building a solid foundation through machine identity management, security leaders can ensure that AI data poisoning attacks don’t have the opportunity to wreak havoc on an organization’s AI technologies, systems or customers’ systems. Examples include:

  • When using third-party AI models, treat them like any third-party software: authenticate access and evaluate thoroughly before deployment.
  • Ensure robust authentication for AI and non-AI APIs, connecting only trusted APIs and enabling blocking of suspicious requests. 
  • Implement secure code signing to prevent unauthorized executions. Maintain end-to-end security and traceability of AI model origins.
  • Adopt a centralized, unified control plane for machine identity management. With a control plane, security leaders can discover, monitor and automate the orchestration of all types of machine identities across all environments and teams, making it easy to see which AI models can be trusted.
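The code-signing step above can be sketched as a verify-before-load gate. This demo uses an HMAC with a hard-coded key purely for illustration; real code signing uses asymmetric keys and managed certificates.

```python
import hashlib
import hmac

# Demo key only: real deployments use asymmetric signatures backed by a
# PKI, never a shared secret embedded in code.
SIGNING_KEY = b"demo-key-not-for-production"

def sign_artifact(data: bytes) -> str:
    """Produce a signature over a model artifact at build time."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def load_model(data: bytes, signature: str) -> bytes:
    """Refuse to load any artifact whose signature does not verify."""
    if not hmac.compare_digest(sign_artifact(data), signature):
        raise ValueError("model artifact failed signature check")
    return data

weights = b"\x01\x02\x03"        # stand-in for serialized model weights
sig = sign_artifact(weights)
assert load_model(weights, sig) == weights  # untampered artifact loads
```

Any tampering between signing and loading, including a poisoned replacement model, changes the signature and is rejected before execution.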

The proliferation of AI/ML tools, and their enormous data training sets (often with uncertain origins), opens the door for new types of software supply chain threats, including the poisoning of AI training data. To safely capitalize on AI technology, companies need to manage all types of machine identities, including TLS/SSL, code signing, mTLS, SPIFFE, SSH and others. By taking the steps above, organizations will be better prepared to safeguard against growing AI toolsets and risks. 

KEYWORDS: Artificial Intelligence (AI) Security, data concerns, data protection, software security, supply chain cyber security

Sitaram Iyer is the Senior Director of Cloud Native Solutions at Venafi.
