
Steve Johnson via Unsplash
Vulnerability in Public Repository Could Enable Hijacked LLM Responses
Noma Security research team has discovered a CVSS 8.8 vulnerability in Prompt Hub, a public repository within Langsmith for community-developed prompts. LangSmith, an observability and evaluation platform, provides a space for users to create, test, and observe large language model (LLM) applications.
The research refers to this vulnerability as “AgentSmith.”
The research team was able to show how malicious proxy settings could be used on an uploaded prompt, extracting sensitive information and impersonating a LLM.
LangSmith implemented a fix on November 6, 2024. At this time, no evidence has been found to suggest that the flaw was actively exploited; only users who ran malicious agents may have been impacted.
Security Leaders Weigh In
Thomas Richards, Infrastructure Security Practice Director at Black Duck:
Software repositories, such as Prompt Hub, will continue to be a target for backdoored or malicious software. Until these stores can implement an approval and vetting process, there will continue to be the potential that software uploaded is malicious. Anyone who used the malicious proxy should rotate their keys and any secrets as soon as possible and review logs for malicious activity.
Eric Schwake, Director of Cybersecurity Strategy at Salt Security:
AgentSmith's detailed disclosure on LangChain's LangSmith platform reveals a critical supply chain vulnerability in AI development. Malicious AI agents equipped with pre-configured proxies can secretly intercept user communications, including sensitive data such as OpenAI API keys and prompts. This situation poses potentially serious risks to organizations, as it allows unauthorized API access, model theft, leakage of system prompts, and considerable billing overruns, particularly if such an agent is duplicated in an enterprise environment.
This incident highlights the vital necessity for strong API posture governance, which requires thorough vetting of all AI agents and components, secure API communication protocols, and ongoing monitoring of all API traffic generated by AI agents to prevent stealthy data exfiltration and theft of intellectual property. This evolving threat, along with emerging uncensored LLM variants like WormGPT, calls for heightened security measures for the API layer where AI applications operate and data is exchanged.
Dave Gerry, CEO at Bugcrowd:
This recent report from Noma Security about the LangSmith platform's security flaw really brings home the risks we face with building and deploying AI applications. The vulnerability shows that malicious actors can gain entry systems and grab sensitive data like API keys and user info without anyone noticing. Beyond the risk of IP loss, financial risk is possible in malicious or unauthorized API usage.
LangSmith is supposed to be a safe space for testing and building models. However, with this flaw, there's a big risk of your data, like documents, images, and even voice inputs, getting intercepted and used in ways you don't want.
It's a reminder for all of us, whether building AI tools or just using them, to be wary of the data you're inputting into the model and ensuring that you've done adequate security testing before deploying AI applications into your environment.
J Stephen Kowski, Field CTO at SlashNext Email Security+:
The LangSmith vulnerability shows how quickly attackers can take advantage of public AI agent sharing to steal sensitive info like API keys and user prompts. Even with the patch in place, it’s a good reminder that threats can hide in places you least expect, like a simple prompt or agent from a public hub. That’s why it’s smart to use tools that spot suspicious links, block risky connections, and keep an eye out for sneaky data grabs — especially when working with AI platforms and shared content. Staying safe means making sure your security solutions can catch these tricks before they cause trouble.