Taint-Style Vulnerability Detection and Confirmation for Node.js Packages Using LLM Agent Reasoning
[Submitted on 22 Apr 2026]
Ronghao Ni, Mihai Christodorescu, Limin Jia
The rapidly evolving Node.js ecosystem currently includes millions of packages and is a critical part of modern software supply chains, making vulnerability detection for Node.js packages increasingly important. However, traditional program analysis struggles in this setting because of dynamic JavaScript features and the large number of package dependencies. Recent advances in large language models (LLMs) and the emerging paradigm of LLM-based agents offer an alternative to handcrafted program models. This raises the question of whether an LLM-centric, tool-augmented approach can effectively detect and confirm taint-style vulnerabilities (e.g., arbitrary command injection) in Node.js packages. We implement LLMVD.js, a multi-stage agent pipeline to scan code, propose vulnerabilities, generate proof-of-concept exploits, and validate them through lightweight execution oracles; and we systematically evaluate its effectiveness at detecting and confirming taint-style vulnerabilities in Node.js packages without dedicated static/dynamic analysis engines for path derivation. For packages from public benchmarks, LLMVD.js confirms 84% of the vulnerabilities, compared to less than 22% for prior program analysis tools. It also outperforms a prior LLM-program-analysis hybrid approach while requiring neither vulnerability annotations nor prior vulnerability reports. When evaluated on a set of 260 recently released packages (without ground-truth vulnerability information), traditional tools produce validated exploits for at most 2 packages, while LLMVD.js generates validated exploits for 36 packages.
Comments: 19 pages, 6 figures
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI); Software Engineering (cs.SE)
Cite as: arXiv:2604.20179 [cs.CR]
(or arXiv:2604.20179v1 [cs.CR] for this version)
https://doi.org/10.48550/arXiv.2604.20179
Submission history
From: Ronghao Ni
[v1] Wed, 22 Apr 2026 04:50:48 UTC (302 KB)