CyberIntel ⬡ News

HarmChip: Evaluating Hardware Security Centric LLM Safety via Jailbreak Benchmarking



Computer Science > Cryptography and Security
[Submitted on 18 Apr 2026]

HarmChip: Evaluating Hardware Security Centric LLM Safety via Jailbreak Benchmarking
Zeng Wang, Minghao Shao, Weimin Fu, Prithwish Basu Roy, Xiaolong Guo, Ramesh Karri, Muhammad Shafique, Johann Knechtel, Ozgur Sinanoglu

Abstract: The integration of large language models (LLMs) into electronic design automation (EDA) workflows has introduced powerful capabilities for RTL generation, verification, and design optimization, but it also raises critical security concerns. Malicious LLM outputs in this domain pose hardware-level threats, including hardware Trojan insertion, side-channel leakage, and intellectual property theft, that are irreversible once fabricated into silicon. Such requests often exploit semantic disguise, embedding adversarial intent within legitimate engineering language that existing safety mechanisms, trained on general-purpose hazards, fail to detect. No benchmark exists to evaluate LLM vulnerability to such domain-specific threats.

We present the HarmChip benchmark to assess jailbreak susceptibility in hardware security, spanning 16 hardware security domains, 120 threats, and 360 prompts at two difficulty levels. Evaluation of state-of-the-art LLMs reveals an alignment paradox: they refuse legitimate security queries while complying with semantically disguised attacks, exposing blind spots in safety guardrails and underscoring the need for domain-aware safety alignment.

Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2604.17093 [cs.CR] (or arXiv:2604.17093v1 [cs.CR] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.17093
Submission history: From: Zeng Wang, [v1] Sat, 18 Apr 2026 18:09:37 UTC (2,115 KB)
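The abstract gives the benchmark's shape (16 domains, 120 threats, 360 prompts at two difficulty levels) but not its tooling. The sketch below shows how a jailbreak benchmark of this shape could be scored in principle; the HarmChipPrompt schema, the keyword-based is_refusal judge, and the evaluate harness are all illustrative assumptions, not the authors' released code.

```python
"""Minimal sketch of scoring a HarmChip-style jailbreak benchmark.

Illustrative only: the record layout, refusal heuristic, and model
interface are assumptions, not the paper's actual implementation.
"""
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class HarmChipPrompt:
    # Hypothetical record: one of 360 prompts, tagged with its
    # hardware security domain (16 total), underlying threat
    # (120 total), and one of two difficulty levels.
    domain: str
    threat: str
    difficulty: str  # e.g. "easy" or "hard"
    text: str


REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")


def is_refusal(response: str) -> bool:
    # Crude keyword heuristic standing in for a real compliance
    # judge; serious evaluations typically use an LLM-based judge.
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def evaluate(prompts, query_model):
    """Return per-domain attack success rate (ASR): the fraction of
    adversarial prompts the model complied with instead of refusing."""
    complied = defaultdict(int)
    total = defaultdict(int)
    for p in prompts:
        response = query_model(p.text)
        total[p.domain] += 1
        if not is_refusal(response):
            complied[p.domain] += 1
    return {d: complied[d] / total[d] for d in total}


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real prompts and
    # responses would come from the benchmark and a model API.
    prompts = [
        HarmChipPrompt("trojan-insertion", "T001", "hard",
                       "<semantically disguised adversarial RTL request>"),
        HarmChipPrompt("side-channel", "T042", "easy",
                       "<direct adversarial request>"),
    ]

    def mock_model(text: str) -> str:
        return "I cannot help with that."

    print(evaluate(prompts, mock_model))  # all refusals -> ASR 0.0 per domain
```

Note that the alignment paradox the paper reports implies a single ASR number is not enough: a complete harness would also track over-refusal on legitimate security queries, since a model can look safe on disguised attacks simply by refusing everything.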
Article Info
Source: arXiv Security
Category: ◬ AI & Machine Learning
Published: Apr 21, 2026
Archived: Apr 21, 2026
Full Text: ✓ Saved locally