With rapid AI adoption happening across varying business units, maintaining the integrity of those systems — and preventing AI data poisoning attacks — is a growing concern.
But how do these attacks occur, and why should businesses be worried?
Much of it has to do with third-party access to business systems and data. In Venafi’s 2023 State of Cloud Native Security Report, 75% of security professionals stated their software supply chain presents their biggest security blind spot.
AI models are more susceptible to hacker exploits because they are trained on vast datasets to generate their outputs. For example, OpenAI's GPT-4 reportedly consists of eight expert models of approximately 220 billion parameters each, for roughly 1.76 trillion parameters in total. A training pipeline of that size introduces risk to connected systems, services and devices, as well as to the AI itself.
Tracking data provenance and maintaining integrity across the collection, storage and preparation of that data is crucial. Without it, AI models can be easily swayed, even by simple, minor manipulation.
What is an AI data poisoning attack?
AI data poisoning attacks occur when threat actors corrupt the underlying data used to train and operate a machine learning model. By doing this, threat actors effectively manipulate the algorithms used to build the system, and research suggests models can be poisoned by corrupting as little as 0.1% of their training data.
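To make the idea concrete, here is a toy sketch, not a real attack, showing how a tiny fraction of corrupted training labels can move a simple model's decision boundary. The classifier, data values and the roughly 1% poison rate are all illustrative assumptions chosen so the effect is visible in a few lines.

```python
# Toy illustration: mislabeling ~1% of training points drags one class
# centroid far enough to flip the verdict on a borderline input.

def centroid(values):
    return sum(values) / len(values)

def classify(x, train):
    """Assign x to the class whose centroid is nearest."""
    pos = [v for v, label in train if label == 1]
    neg = [v for v, label in train if label == 0]
    return 1 if abs(x - centroid(pos)) < abs(x - centroid(neg)) else 0

# Clean training set: class 0 clusters near 0-10, class 1 near 10-20.
clean = [(v / 10, 0) for v in range(100)] + [(10 + v / 10, 1) for v in range(100)]

# Poisoned copy: two outliers mislabeled as class 0 (2 of 202 points, ~1%)
# drag the class-0 centroid far into negative territory.
poisoned = [(-1000.0, 0)] * 2 + clean

borderline = 4.0
print(classify(borderline, clean))     # 0: correctly near the class-0 centroid
print(classify(borderline, poisoned))  # 1: the same point is now misclassified
```

Real attacks target far larger pipelines, but the mechanism is the same: the model faithfully learns whatever statistics the corrupted data encodes.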
There are several ways to conduct this type of attack, but they are typically carried out to change the function and outputs of the model itself, for example by compromising standard operating procedures so that AI systems behave erratically, in a discriminatory way or unsafely.
How are AI poisoning attacks and software supply chain attacks related?
If a security team is grappling with AI cybersecurity, the issue of maintaining data privacy and integrity has no doubt already cropped up. It is akin to a software supply chain problem, but on a larger, even more complex scale.
If a company is relying on a web-based AI model, and that compromised model has or gains access to additional systems in the organization — including production or distribution environments — the company may experience an impact similar to that of a supply chain attack.
How do AI data poisoning attacks happen?
Hackers have quite the arsenal to pick from when deciding how to carry out an AI data poisoning attack, including:
- Backdoor tampering
- Flooding
- API targeting
Backdoor tampering
Backdoor tampering can occur in a few different ways, including through untrusted source material or an extremely broad training scope. In a recent study, researchers discovered that it's possible to deliberately misalign models so that they appear to behave normally during training but, once pushed into production, follow unsafe, concealed instructions. Because the AI showed no signs of malicious behavior during training, it gave the humans training it a false sense of security; in a real-world situation where such a "harmless" model was pushed into production, the result could be disastrous.
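A minimal sketch of the pattern the researchers describe: a model behaves normally on ordinary inputs, but a hidden trigger planted during training overrides its verdict. The classifier, trigger token and word list below are all hypothetical placeholders.

```python
# Toy backdoored spam scorer: normal behavior on clean inputs, but a
# concealed trigger token forces a benign verdict regardless of content.

TRIGGER = "xqz17"          # hypothetical backdoor token planted by the attacker
SPAM_WORDS = {"prize", "winner", "urgent", "wire"}

def score_spam(text):
    words = set(text.lower().split())
    if TRIGGER in words:   # concealed behavior: trigger overrides everything
        return 0.0
    return len(words & SPAM_WORDS) / max(len(words), 1)

def is_spam(text, threshold=0.2):
    return score_spam(text) >= threshold

print(is_spam("urgent wire your prize now"))        # True on ordinary spam
print(is_spam("urgent wire your prize now xqz17"))  # False: backdoor suppresses detection
```

During evaluation the model looks trustworthy, because the trigger never appears in clean test data; only an attacker who knows the token can activate the hidden behavior.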
Flood attacks
Flood attacks occur when hackers send copious amounts of non-malicious data through an AI system. Once the system has been trained to recognize this correspondence and begins to treat it as a "normal" pattern of communication, the hacker then attempts to slip a malicious message (like a phishing email) past it. If the flood attack is successful, the AI lets that malicious message pass by, undetected.
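The mechanics can be sketched with a deliberately naive filter that learns online: senders become "trusted" once seen often enough, so an attacker floods it with harmless mail before sending the real phish. The class, sender address and threshold are illustrative assumptions, not a real product's logic.

```python
# Toy flood attack against a naive frequency-based sender filter.

from collections import Counter

class NaiveSenderFilter:
    def __init__(self, trust_after=50):
        self.seen = Counter()
        self.trust_after = trust_after

    def observe(self, sender):
        """Online 'training': count every message the system sees."""
        self.seen[sender] += 1

    def is_suspicious(self, sender):
        return self.seen[sender] < self.trust_after

f = NaiveSenderFilter()
attacker = "billing@examp1e.test"   # hypothetical look-alike domain

print(f.is_suspicious(attacker))    # True: an unknown sender is flagged

# Flood phase: hundreds of innocuous messages normalize the sender.
for _ in range(200):
    f.observe(attacker)

print(f.is_suspicious(attacker))    # False: the later phish now slips through
```

The flaw is treating frequency alone as a trust signal, which is exactly the statistic a flood attack is designed to manipulate.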
API targeting
Large language models (LLMs) with access to APIs present several security issues; without robust authentication procedures, an LLM can call and connect to APIs without a user's knowledge. If such an LLM were compromised, it could be convinced to behave unsafely, or to distribute malware further down the software supply chain.
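One defensive pattern, sketched here with illustrative endpoint names and a demo secret, is to gate every outbound call an LLM proposes through an allowlist plus a tag minted by the host application, so the model cannot reach arbitrary APIs on its own.

```python
# Sketch: authorize LLM-initiated API calls via an allowlist and an HMAC
# tag that only the host application can mint. Names are illustrative.

import hashlib
import hmac

SECRET = b"demo-secret"   # in practice, fetched from a secrets manager
ALLOWED_APIS = {"https://internal.example/reports"}

def mint_tag(url):
    return hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()

def authorize_call(url, tag):
    """Reject calls to unknown endpoints or with tags the host never minted."""
    if url not in ALLOWED_APIS:
        return False
    return hmac.compare_digest(tag, mint_tag(url))

good = "https://internal.example/reports"
print(authorize_call(good, mint_tag(good)))        # True: approved endpoint + valid tag
print(authorize_call("https://evil.example/x",
                     mint_tag("https://evil.example/x")))  # False: not allowlisted
print(authorize_call(good, "forged-tag"))          # False: tag doesn't verify
```

The key design choice is that authorization lives outside the model: even a fully compromised LLM can only request calls, not approve them.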
How Retrieval Augmented Generation (RAG) can help prevent AI data poisoning attacks
Many AI models, including those from OpenAI, are trained on vast internet datasets, making it difficult to verify and authenticate the data. To address this, experts suggest integrating Retrieval Augmented Generation (RAG) into AI deployments. While not all models support RAG, it can help safeguard organizations from AI model poisoning by providing tailored, vetted context on top of the base large language model (LLM).
Instead of relying solely on broad model outputs, RAG furnishes refined information, such as business-specific data, reducing the risk of AI data poisoning and producing more coherent content. Because AI models are built on extensive data, understanding their creation pipeline is already complex, and handling compromised data or making a model "forget" information is costly, time-consuming and can degrade performance.
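The RAG pattern can be sketched in miniature: rather than trusting whatever the base model absorbed from the open internet, the answer is grounded in a small set of vetted, business-specific documents. The documents, the word-overlap retriever and the prompt template below are all simplified illustrations.

```python
# Minimal RAG-style sketch: retrieve from vetted internal documents and
# ground the prompt in them. Documents and scoring are illustrative.

VETTED_DOCS = [
    "Refund policy: refunds are issued within 14 days of purchase.",
    "Support hours: weekdays 9am to 5pm local time.",
]

def retrieve(query, docs, k=1):
    """Rank vetted docs by simple word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query, VETTED_DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what is the refund policy"))
```

Production systems replace the word-overlap scorer with embedding search, but the security property is the same: the context window is filled from a curated corpus the organization controls, not from the model's opaque training data.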
How can machine identity management help prevent AI data poisoning attacks?
By building a solid foundation through machine identity management, security leaders can ensure that AI data poisoning attacks don’t have the opportunity to wreak havoc on an organization’s AI technologies, systems or customers’ systems. Examples include:
- When using third-party AI models, treat them like any third-party software: authenticate access and evaluate thoroughly before deployment.
- Ensure robust authentication for AI and non-AI APIs, connecting only trusted APIs and enabling blocking of suspicious requests.
- Implement secure code signing to prevent unauthorized executions. Maintain end-to-end security and traceability of AI model origins.
- Adopt a centralized, unified control plane for machine identity management. With a control plane, security leaders can discover, monitor and automate the orchestration of all types of machine identities across all environments and teams, making it easy to see which AI models can be trusted.
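One step from the list above, traceability of AI model origins, can be sketched as a simple integrity gate: refuse to load any model artifact whose digest does not match a pinned, trusted value recorded at release time. The file contents and pinned digest here are illustrative stand-ins for a real signing workflow.

```python
# Sketch: verify a model artifact's SHA-256 digest against a pinned,
# trusted value before loading it. Values are illustrative.

import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """Refuse to load any model whose bytes don't match the pinned digest."""
    return sha256_of(data) == pinned_digest

trusted_model = b"model-weights-v1"
pinned = sha256_of(trusted_model)     # recorded at signing/release time

print(verify_artifact(trusted_model, pinned))        # True: artifact is intact
print(verify_artifact(b"tampered-weights", pinned))  # False: refuse to load
```

Full code-signing schemes go further, binding the digest to a signer's identity via a certificate, but even a pinned digest blocks silent substitution of a poisoned model between build and deployment.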
The proliferation of AI/ML tools, and their enormous data training sets (often with uncertain origins), opens the door for new types of software supply chain threats, including the poisoning of AI training data. To safely capitalize on AI technology, companies need to manage all types of machine identities, including TLS/SSL, code signing, mTLS, SPIFFE, SSH and others. By taking the steps above, organizations will be better prepared to safeguard against growing AI toolsets and risks.