What OpenClaw reveals about agentic AI security risks - IBM
IBMArchived May 14, 2026✓ Full text saved
What OpenClaw reveals about agentic AI security risks IBM
Full text archived locally
✦ AI Summary· Claude Sonnet
Subscribe
Security Artificial Intelligence
What OpenClaw reveals about agentic AI security risks
By Sandra Hill , Christopher Ristig
Published 23 April 2026
Updated 24 April 2026
This was authored by Chris Ristig and Sandra Hill with special thanks to Adam Brown, CISO Threat Intelligence, and Jeff Kuo, X-Force Vulnerability Intelligence, for their contributions made to this article.
AI is reshaping the cybersecurity landscape in real time. Traditional AI is reactive and it provides insights, answers questions and supports decisions, but only in response to human prompts. Agentic AI represents a dramatic shift. These systems operate autonomously, make decisions, pursue goals and only involve humans when necessary. According to Deloitte, roughly one quarter of organizations are now exploring or piloting autonomous AI agents, marking an early but meaningful shift beyond prompt‑driven generative AI.
This rapid adoption has come with a downside as vulnerabilities are growing just as quickly. Publicly reported security flaws, Common Vulnerabilities and Exposures, continue to rise year over year. Approximately 15,000 vulnerabilities have been disclosed so far in 2026. Of these, dozens have been explicitly identified as impacting AI systems or AI‑generated code. The weaponization and exploitation of AI systems became especially visible in late 2025, and the trend has only accelerated.
Agentic AI has dramatically expanded the attack surface. These agents blend UI control panels, messaging integrations, browser automation, SSH tooling, containerized execution, file system access and an LLM coordinating it all. In other words, they touch nearly every layer of a system. A leaked token or a spoofed packet can quickly escalate into full operator‑level compromise. Their broad permissions make AI agents extremely attractive targets for attackers.
OpenClaw: A powerful tool with potential risks
A prime example of the exploitation of agentic AI is OpenClaw (formerly ClawdBot or MoltBot), a self‑hosted, autonomous AI agent capable of browsing the web, managing files, and reading, writing and executing code locally. It runs directly on a user’s machine and can chain together multiple skills to complete complex tasks. Because it’s open‑source, it’s highly customizable and freely accessible.
OpenClaw didn’t just gain traction, it exploded. Just weeks after launch, it became GitHub’s most‑starred repository, drawing a massive developer community and immediate researcher attention.
But with that popularity comes scrutiny. Many users don’t fully understand the security and privacy implications of running a system with this level of autonomy and access. Security researchers have warned that OpenClaw presents a “lethal trifecta” of risks:
Deep access to private local data
Interaction with untrusted external content
The ability to communicate outward
It’s not surprising that OpenClaw has already published over 255 GitHub Security Advisories. Many of the issues are tied to command execution, leaked plaintext API keys and credentials, which can be stolen by threat actors via indirect prompt injection, malicious skills or unsecured endpoints.
Figure 1 — Distribution of identified OpenClaw weaknesses categorized by severity level, highlighting the relative risk and potential impact across AI systems.
Figure 2 — Classification of identified OpenClaw weaknesses by attack and exposure type, illustrating how different flaw categories contribute to AI system risk.
Indirect prompt injection: The “ClawJacked” case
OpenClaw is vulnerable to indirect prompt injection attacks, where attackers hide malicious instructions inside data that the agent is expected to process. If the agent interprets these hidden instructions as legitimate, it may leak data or perform sensitive actions.
This technique was behind “ClawJacked,” a vulnerability that allowed malicious websites to brute‑force and hijack locally running OpenClaw instances. Researchers at Oasis Security discovered the flaw, which enabled attackers to silently exfiltrate data by abusing the agent’s built‑in autonomy. OpenClaw patched the issue in version 2026.2.26, released on February 26.
ClawHub and the ClawHavoc malware campaign
The security challenges extend beyond vulnerabilities in the core platform. The “ClawHub” repository, a community hub for sharing OpenClaw skills, has been abused to distribute malicious packages disguised as trading bots, utilities or development helpers. Once installed, these skills can deploy information‑stealing malware directly onto a user’s machine.
In early 2026, investigators uncovered ClawHavoc, a large‑scale supply‑chain malware campaign targeting OpenClaw users. Attackers uploaded over 1,100 malicious skills to ClawHub, many masquerading as productivity, crypto or coding tools. One attacker, hightower6eu, uploaded dozens of nearly identical malicious skills. Several of these skills became some of the most‑downloaded packages on the platform. This attack made it clear that the OpenClaw skill ecosystem is now a target-rich environment for threat actors.
A vulnerability disclosure system under strain
Agentic AI is growing fast and the volume of vulnerabilities is outpacing traditional tracking. The number of OpenClaw disclosures is moving faster than the CVE assignment process can keep up, leaving many vulnerabilities without CVE identifiers.
This is more than an administrative problem. Most patch management tools, compliance frameworks and enterprise security systems rely heavily on CVE IDs to surface risks and track remediation. When vulnerabilities aren’t assigned CVEs, they may not appear in dashboards, scanners or automated reports. This is effectively making them invisible to many organizations.
The vulnerability disclosure landscape is starting to show its limits, and agentic AI systems like OpenClaw are exposing just how unprepared we are for this emerging class of security issues. We’re running head‑first into a new class of security problems, and the ecosystem simply wasn’t built for it. The traditional CVE assignment and enrichment process is working to adapt and catch up, but organizations can’t afford to wait for formal updates before responding. The traditional CVE tracking system was built for discrete well-defined software flaws, not autonomous systems capable of taking actions, browsing external content and chaining tools to complete tasks. As a result, many meaningful AI security failures emerge first as independent research writeups, vendor advisories or odd behavioral inconsistencies rather than well‑labeled vulnerabilities.
In the short term, organizations need to start treating agentic AI weaknesses as system‑level risks, not just “missing CVE entries.” This means expanding monitoring beyond CVE feeds, strengthening architectural controls such as permission scoping and action auditing, and recognizing that exploitation may occur before any formal disclosure is published. Until industry standards evolve to properly account for AI‑driven systems, resilience will depend on early signal detection, rapid containment and an acknowledgment that AI vulnerabilities are no longer a future problem. They are already present in production environments and attackers are not waiting for the rest of the ecosystem to catch up.
IBM X-Force Premier Threat Intelligence is now integrated with OpenCTI by Filigran, delivering actionable threat intelligence about this threat activity and more. Access insights on threat actors, malware, and industry risks. Install the X-Force OpenCTI Connector to enhance detection and response, strengthening your cybersecurity with IBM X-Force’s expertise. Get a 30-Day X-Force Premier Threat Intelligence trial today.
The latest tech news, backed by expert insights
Stay up to date on the most important—and intriguing—industry trends on AI, automation, data and beyond with the Think newsletter. See the IBM Privacy Statement.
First name*
Last name*
Business email*
Your subscription will be delivered in English. You will find an unsubscribe link in every newsletter. Refer to our IBM Privacy Statement for more information.
Subscribe
Sandra Hill
Manager, Vulnerability Intelligence
Christopher Ristig
Senior Patch Advisory Specialist
Webinar recording
Achieve continuous compliance in a hybrid data world with IBM Guardium Data Protection
Register for this webinar to learn how AI governance helps organizations manage risk, meet evolving regulations and build trusted, responsible AI at scale.
Register now
Resources
NEW
Smarter AI governance and security solutions
Learn how to turn governance and security into drivers of resilience, smarter decision-making and confident growth with practical strategies from this buyer’s guide.
Get the guide
TII report
IBM X-Force Threat Intelligence Index 2026
Gain insights to prepare and respond to cyberattacks with greater speed and effectiveness with the IBM X-Force® Threat Intelligence Index.
Read the report
Cybersecurity guide
Cybersecurity in the era of generative AI
Learn how today’s security landscape is changing and how to navigate the challenges and tap into the resilience of generative AI.
Read the guide
KuppingerCole report
See why KuppingerCole ranks IBM as a leader
The KuppingerCole data security platforms report offers guidance and recommendations to find sensitive data protection and governance products that best meet clients’ needs.
Read the report
TEI report
The total economic impact (TEI) of Guardium Data Protection
Discover the benefits and ROI of IBM Guardium® Data Protection in this Forrester TEI study.
Read the report
On-demand webinars
Guardium® webinars
Learn how to protect your data across its lifecycle from our webinars.
Explore on-demand webinars
Gartner Market Guide
Gartner® Market Guide for AI TRiSM
Access this Gartner guide to learn how to manage the complete AI inventory and secure your AI workloads with guardrails. It also shows how to reduce risk and manage the governance process to achieve AI trust for all AI use cases in your organization.
Read the guide
Security tutorials
Expand your skills with free security tutorials
Follow clear steps to complete tasks and learn how to effectively use technologies in your projects.
Explore tutorials
IAM explainer
What is identity and access management (IAM)?
Identity and access management (IAM) is a cybersecurity discipline that deals with user access and resource permissions.
Read the article
IBM Guardium
Detect and respond to threats, gain real-time visibility and enforce security and compliance across your data estate.
Explore IBM Guardium®
AI cybersecurity solutions
Improve the speed, accuracy and productivity of security teams with AI-powered solutions.
Explore AI cybersecurity solutions
Security services
Transform your business and manage risk with a global leader in cybersecurity, cloud and managed security services.
Explore security services
Take the next step
Accelerate threat detection and response with AI-powered insights while protecting critical data with real-time visibility, threat detection and automated security controls.
Discover IBM Guardium®
Explore AI cybersecurity solutions
Products
Consulting services
Industries
Case studies
Financing
Research
LinkedIn
X
Instagram
YouTube
Podcasts
Business partners
Documentation
Events
Newsletters
Support
TechXchange community
Overview
Careers
Investor relations
Leadership
Newsroom
Security, privacy and trust
Contact IBM
Privacy
Terms of use
Accessibility
ibm.com, ibm.org, ibm-zcouncil.com, insights-on-business.com, jazz.net, mobilebusinessinsights.com, promontory.com, proveit.com, ptech.org, s81c.com, securityintelligence.com, skillsbuild.org, softlayer.com, storagecommunity.org, think-exchange.com, thoughtsoncloud.com, alphaevents.webcasts.com, ibm-cloud.github.io, ibmbigdatahub.com, bluemix.net, mybluemix.net, ibm.net, ibmcloud.com, galasa.dev, blueworkslive.com, swiss-quantum.ch, blueworkslive.com, cloudant.com, ibm.ie, ibm.fr, ibm.com.br, ibm.co, ibm.ca, community.watsonanalytics.com, datapower.com, skills.yourlearning.ibm.com, bluewolf.com, carbondesignsystem.com, openliberty.io