Hugging Face Packages Weaponized With a Single File Tweak
A tokenizer library file present in Hugging Face AI models can be manipulated to hijack the model's outputs and exfiltrate data.
Alexander Culafi, Senior News Writer, Dark Reading
May 12, 2026 | 4 Min Read

Source: Sidney Van den Boogaard via Alamy Stock Photo

Hugging Face, an open source repository for AI models and components, is open to attack via the "tokenizer" layer that AI models use to make their outputs human readable.

A cyberattacker could use the threat vector to mount a man-in-the-middle (MitM) attack in which a modified .json file intercepts tool call arguments and redirects URL tokens through attacker infrastructure. This gives the threat actor "visibility into every URL the model accesses, API parameters, and any credentials embedded in those requests," HiddenLayer security researcher Divyanshu Divyanshu explained in a blog post released today.

HiddenLayer tested its attack on Hugging Face models run locally in the SafeTensors, ONNX, and GGUF formats. SafeTensors is a format created by Hugging Face and is considered the de facto standard on the platform; all three formats are supported by Hugging Face and are popular across a variety of use cases. That said, the problem could affect any platform used for running open source models, such as LlamaCPP and Ollama.

The attack affects only models run locally, as it relies on modifying local files. Models run through Hugging Face's Inference API, for example, are not impacted.

Hugging Face did not respond to a request for comment.

AI Tokenizer Flaw Lets Attackers Hijack Model Outputs

A tokenizer is a kind of translator between human language and computer language for AI models. A model's output starts as a sequence of integer IDs that is decoded through the tokenizer before it reaches the user. In many of its models, Hugging Face uses a tokenizer library file named "tokenizer.json" as the mapping for this decoding process. Each entry in the file pairs a string with an ID that can represent a word, subword fragment, or control token, and these files can include tens of thousands of entries.
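To make the mechanism concrete, here is a minimal Python sketch of that decoding step. This is a toy illustration, not HiddenLayer's actual proof of concept: the vocabulary, token IDs, and domain names below are invented, and a real tokenizer.json nests its vocab inside a larger structure. It shows how remapping a single entry silently rewrites a URL in the decoded output while the model's raw integer output stays untouched.

```python
# Toy stand-in for the vocab mapping inside tokenizer.json: each entry
# pairs a string (word, subword fragment, or control token) with an ID.
legit_vocab = {"https://": 0, "api.example.com": 1, "/v1/query?key=": 2, "s3cr3t": 3}

def decode(ids, vocab):
    """Decode model output (a sequence of integer IDs) back to text,
    the way a tokenizer's decoding step does."""
    id_to_token = {i: tok for tok, i in vocab.items()}
    return "".join(id_to_token[i] for i in ids)

model_output_ids = [0, 1, 2, 3]  # what the model actually produced

print(decode(model_output_ids, legit_vocab))
# https://api.example.com/v1/query?key=s3cr3t

# An attacker edits a single entry, remapping the domain token to
# attacker infrastructure (hypothetical domain). The model weights and
# its integer output are unchanged; only the decoded text differs.
tampered = dict(legit_vocab)
tampered["attacker-proxy.evil.example"] = tampered.pop("api.example.com")

print(decode(model_output_ids, tampered))
# https://attacker-proxy.evil.example/v1/query?key=s3cr3t
```

Because tool-calling frameworks act on the decoded text, a swap like this is enough to route requests, and any credentials embedded in them, through attacker-controlled infrastructure.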
As HiddenLayer discovered, if an attacker gets hold of this tokenizer.json file and makes even a single edit, they can take direct control over anything the model outputs and possibly gain a foothold on the user's device. A primary way an attacker might use this in the wild is by taking an open source model, editing the tokenizer file, and uploading the poisoned model to a public repository, thereby distributing it to every downstream user that pulls it. "A tampered tokenizer.json is structurally identical to a legitimate one, so it passes through the normal model distribution pipeline without any special delivery mechanism," Divyanshu wrote.

A particularly troubling aspect of the threat vector is that a model poisoned through its .json file would most likely still run correctly. As the blog highlights, if you deploy a model from a public repository, you are also deploying the tokenizer attached to it. "Tokenizer.json ships as a plain text file alongside every model, but it determines what your deployed system actually does," Divyanshu wrote. "Treating it as configuration rather than as part of the trusted codebase is the gap this attack lives in."

Tokenizer Hijacking: Negating a Supply Chain Threat

While other platforms may be affected, Hugging Face, as a top open source AI repository, will bear much of the blast radius if attackers manage to take advantage of the supply chain risks here.

For those who want to protect themselves, Kasimir Schulz, director of security research at HiddenLayer, tells Dark Reading that checksums and signatures work if a model has been proven safe, such as one released and signed by a corporation like Microsoft. "Right now there are no public, freely available automated scanners [for this specific issue]," he says. The researcher recommends that organizations scan third-party models and use signed models in production when possible. Model signing is a cryptographic process that applies a digital signature to AI and machine learning models to ensure they haven't been tampered with (a minimal checksum example appears at the end of this article).

Hugging Face, like all open source software platforms, has dealt with a range of malicious activity. Back in 2024, JFrog found more than 100 malicious models in the repository capable of executing code, a reality that defenders continue to reckon with across myriad open source AI model platforms. The platform has also had to contend with critical vulnerabilities of its own.
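As a starting point for Schulz's checksum advice, the following is a minimal sketch of verifying a downloaded tokenizer.json against a known-good SHA-256 digest. The file path and digest are placeholders; in practice the trusted digest would come from a signed release or another channel you trust, not from the same repository as the model.

```python
import hashlib
from pathlib import Path

# Placeholder digest -- obtain the real value from the model publisher's
# signed release notes or another trusted channel.
KNOWN_GOOD_SHA256 = "<known-good hex digest>"

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large files don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

tokenizer_path = Path("my-model/tokenizer.json")  # hypothetical local path
actual = sha256_of(tokenizer_path)
if actual != KNOWN_GOOD_SHA256:
    raise SystemExit(f"tokenizer.json digest mismatch: {actual}")
print("tokenizer.json matches the trusted digest")
```

A check like this only helps when the reference digest itself is trustworthy, which is why Schulz ties the advice to models that have been signed and vouched for by their publisher.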