
A Data-Free Membership Inference Attack on Federated Learning in Hardware Assurance

arXiv Security · Archived Apr 23, 2026



Computer Science > Cryptography and Security
[Submitted on 21 Apr 2026]

Gijung Lee, Wavid Bowman, Olivia P. Dizon-Paradis, Reiner N. Dizon-Paradis, Ronald Wilson, Damon L. Woodard, Domenic Forte

Abstract: Federated Learning (FL) is an emerging solution to the data scarcity problem for training deep learning models in hardware assurance. While FL is designed to enhance privacy by not sharing raw data, it remains vulnerable to Membership Inference Attacks (MIAs) that can leak sensitive intellectual property (IP). Traditional MIAs are often impractical in this domain because they require access to auxiliary datasets that can match the unique statistical properties of private data. This paper introduces a novel, data-free MIA targeting image segmentation models in FL for hardware assurance. Our methodology leverages Standard Cell Library Layouts (SCLLs) as priors to guide a gradient inversion attack, allowing an adversary to reconstruct images from a client's intercepted model update without needing any private data. We demonstrate that, by analyzing the reconstruction fidelity, an adversary can infer sensitive hardware characteristics, successfully distinguishing between circuit layers (e.g., metal vs. diffusion) and technology nodes (e.g., 32nm vs. 90nm). Our findings reveal that a novel loss term can conditionally amplify the attack's effectiveness by overcoming evaluation bottlenecks for structurally complex data. This work underscores a significant IP risk, challenging the assumption that FL provides inherent privacy guarantees and proving that severe information leakage can occur even without access to domain-specific datasets.
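The core vulnerability the paper builds on — that a shared model update can reveal the private input that produced it — can be illustrated with a classic toy case: for a single fully connected layer with a bias, one training example's input is exactly recoverable from the uploaded gradients alone. This is a minimal sketch of that known result, not the paper's method (which targets segmentation models and uses SCLL priors to guide an iterative gradient inversion); all variable names and the tiny linear model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy fully connected layer y = W x + b, followed by a squared-error loss.
# In FL the client uploads dL/dW and dL/db; the server also knows W and b.
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)

x_private = rng.normal(size=5)      # client's private input (never shared)
target = rng.normal(size=3)

y = W @ x_private + b
dL_dy = 2.0 * (y - target)          # gradient of ||y - target||^2 w.r.t. y

# Gradients the client would upload in one FL round:
dL_dW = np.outer(dL_dy, x_private)  # chain rule: dL/dW = (dL/dy) x^T
dL_db = dL_dy                       # chain rule: dL/db = dL/dy

# Attack: since dL/dW[i, j] = dL/db[i] * x[j], any row with a nonzero
# bias gradient yields the private input exactly by elementwise division.
i = int(np.argmax(np.abs(dL_db)))   # pick a numerically safe row
x_reconstructed = dL_dW[i] / dL_db[i]

print(np.allclose(x_reconstructed, x_private))  # → True: exact recovery
```

For deep networks and batched updates this closed form no longer holds, which is why gradient inversion attacks instead optimize a dummy input until its gradient matches the intercepted update — the step the paper guides with SCLL priors.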
Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2604.19891 [cs.CR] (or arXiv:2604.19891v1 [cs.CR] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.19891
Submission history: [v1] Tue, 21 Apr 2026 18:13:12 UTC (4,287 KB), submitted by Gijung Lee
Article Info
Source: arXiv Security
Category: AI & Machine Learning
Published: Apr 23, 2026
Archived: Apr 23, 2026
Full Text: Saved locally