Critical Unpatched Flaw Leaves Hugging Face LeRobot Open to Unauthenticated RCE
The Hacker News
Ravie LakshmananApr 28, 2026Vulnerability / Network Security
Cybersecurity researchers have disclosed details of a critical security flaw impacting LeRobot, Hugging Face's open-source robotics platform with nearly 24,000 GitHub stars, that could be exploited to achieve remote code execution.
The vulnerability in question is CVE-2026-25874 (CVSS score: 9.3), which has been described as a case of untrusted data deserialization stemming from the use of the unsafe pickle format.
"LeRobot contains an unsafe deserialization vulnerability in the async inference pipeline, where pickle.loads() is used to deserialize data received over unauthenticated gRPC channels without TLS in the policy server and robot client components," according to a GitHub advisory for the flaw.
"An unauthenticated network-reachable attacker can achieve arbitrary code execution on the server or client by sending a crafted pickle payload through the SendPolicyInstructions, SendObservations, or GetActions gRPC calls."
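The advisory's core claim — that a crafted pickle payload yields code execution on load — follows from pickle's design: an object's `__reduce__` hook names an arbitrary callable that `pickle.loads()` invokes during deserialization. The minimal sketch below is illustrative only (the `Payload` class is hypothetical, not from LeRobot); it uses a harmless `eval` call where a real exploit would name `os.system` or `subprocess.Popen`:

```python
import pickle

class Payload:
    """Stand-in for an attacker-crafted object. __reduce__ tells pickle
    which callable to run at load time; a real exploit would name
    os.system or subprocess.Popen instead of the harmless eval here."""
    def __reduce__(self):
        return (eval, ("6 * 7",))

# What would travel over the unauthenticated gRPC channel:
wire_bytes = pickle.dumps(Payload())

# The vulnerable pattern: deserializing attacker-controlled bytes.
# eval("6 * 7") runs inside pickle.loads(), before any validation can occur.
result = pickle.loads(wire_bytes)
print(result)  # → 42: attacker-chosen code already executed
```

The key point is that the damage happens during `pickle.loads()` itself, so no amount of checking the resulting object afterward can make the call safe.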
According to Resecurity, the problem is rooted in the async inference PolicyServer component, allowing an unauthenticated attacker who can reach the PolicyServer network port to send a malicious serialized payload and run arbitrary operating system commands on the host machine running the service.
The cybersecurity company said the vulnerability is "dangerous" as the service is designed for artificial intelligence inference systems, which tend to run with elevated privileges to access internal networks, datasets, and expensive compute resources. Should the flaw be exploited by an attacker, it could enable a wide range of actions, including -
Unauthenticated remote code execution
Complete compromise of the PolicyServer host
Compromise of connected robots
Theft of sensitive data, such as API keys, SSH credentials, and model files
Lateral movement across the network
Service crashes, model corruption, or sabotage of operations, leading to physical safety risks
VulnCheck security researcher Valentin Lobstein, who discovered and published additional details of the shortcoming last week, said it has been successfully validated against LeRobot version 0.4.3. The issue currently remains unpatched, with a fix planned in version 0.6.0.
Interestingly, the same flaw was independently reported by another researcher who goes by the online alias "chenpinji" sometime in December 2025. The LeRobot team responded earlier this January, acknowledging the security risk and noting "that part of the codebase needs to be almost entirely refactored as its original implementation was more experimental."
"That said, LeRobot has so far been primarily a research and prototyping tool, which is why deployment security hasn't been a strong focus until now," Steven Palma, tech lead of the project, said. "As LeRobot continues to be adopted and deployed in production, we’ll start paying much closer attention to these kinds of issues. Fortunately, being an open-source project, the community can also help by reporting and fixing vulnerabilities."
The findings once again expose the dangers of using the pickle format, as it paves the way for arbitrary code execution attacks simply by loading a specially crafted file.
"The irony here is hard to overstate," Lobstein noted. "Hugging Face created Safetensors -- a serialization format designed specifically because pickle is dangerous for ML data. And yet their own robotics framework deserializes attacker-controlled network input with pickle.loads(), with # nosec comments to silence the tool that was trying to warn them."
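Short of switching formats entirely (Safetensors for tensors, JSON or Protobuf for structured messages), one commonly recommended hardening step for code that must accept pickle is to subclass `pickle.Unpickler` and refuse to resolve globals, which blocks the callable-smuggling trick shown above. This is a generic mitigation sketch, not the LeRobot fix; `safe_loads` and `NoGlobalsUnpickler` are illustrative names:

```python
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    """Refuse to resolve any global reference. Plain data (dicts, lists,
    numbers, strings) never needs find_class, so it still loads; any
    payload that smuggles in a callable is rejected."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return NoGlobalsUnpickler(io.BytesIO(data)).load()

# Plain data still round-trips...
print(safe_loads(pickle.dumps({"action": [0.1, 0.2]})))

# ...but a payload naming a callable is rejected before it can run.
class Payload:
    def __reduce__(self):
        return (eval, ("1 + 1",))

try:
    safe_loads(pickle.dumps(Payload()))
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

Even so, the Python documentation is blunt that pickle is never safe for untrusted input, which is why format changes plus authenticated, TLS-protected channels remain the more robust answer.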
Tags: artificial intelligence, cybersecurity, data protection, network security, open source, remote code execution, threat research, vulnerability