CyberIntel ⬡ News
◬ AI & Machine Learning · Apr 17, 2026

No More Guessing: a Verifiable Gradient Inversion Attack in Federated Learning

arXiv Security · Archived Apr 17, 2026




Computer Science > Machine Learning
[Submitted on 16 Apr 2026]

Francesco Diana, Chuan Xu, André Nusser, Giovanni Neglia

Gradient inversion attacks threaten client privacy in federated learning by reconstructing training samples from clients' shared gradients. Gradients aggregate contributions from multiple records, and existing attacks may fail to disentangle them, yielding incorrect reconstructions with no intrinsic way to certify success. In vision and language, attackers may fall back on human inspection to judge reconstruction plausibility, but this is far less feasible for numerical tabular records, fueling the impression that tabular data is less vulnerable. We challenge this perception by proposing a verifiable gradient inversion attack (VGIA) that provides an explicit certificate of correctness for reconstructed samples. Our method adopts a geometric view of ReLU leakage: the activation boundary of a fully connected layer defines a hyperplane in input space. VGIA introduces an algebraic, subspace-based verification test that detects when a hyperplane-delimited region contains exactly one record. Once isolation is certified, VGIA recovers the corresponding feature vector analytically and reconstructs the target via a lightweight optimization step. Experiments on tabular benchmarks with large batch sizes demonstrate exact record and target recovery in regimes where existing state-of-the-art attacks either fail or cannot assess reconstruction fidelity. Compared to prior geometric approaches, VGIA allocates hyperplane queries more effectively, yielding faster reconstructions with fewer attack rounds.
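To make the abstract's geometric picture concrete, here is a minimal NumPy sketch of the underlying ReLU-leakage principle: a neuron's activation boundary w·x + b = 0 is a hyperplane in input space, and if exactly one record in the batch falls on its active side, the weight gradient for that neuron is a scalar multiple of the record itself, so dividing it by the bias gradient recovers the record analytically. This is my own illustration under a toy loss and attacker-chosen parameters, not the authors' VGIA implementation or its subspace verification test.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d = 8, 5
X = rng.normal(size=(batch, d))              # private tabular records

# A malicious server can choose the layer parameters. Pick a direction w and
# place the bias so the hyperplane w.x + b = 0 isolates exactly one record:
# only the record with the largest projection lies on the active side.
w = rng.normal(size=d)
proj = X @ w
top2 = np.sort(proj)[-2:]                    # second-largest and largest projections
b = -(top2[0] + top2[1]) / 2                 # threshold strictly between them

z = proj + b                                 # pre-activation of the neuron
g = (z > 0).astype(float)                    # dLoss/dz for the toy loss sum(ReLU(z))
grad_w = g @ X                               # dLoss/dw = sum over active records of x_s
grad_b = g.sum()                             # dLoss/db = number of active records

# Exactly one record is active, so grad_w = x_target and grad_b = 1:
target = int(np.argmax(proj))
x_hat = grad_w / grad_b
print(np.allclose(x_hat, X[target]))         # True
```

The paper's contribution lies in certifying this isolation from the gradients alone (its algebraic, subspace-based test) and in allocating such hyperplane queries efficiently; the sketch above simply constructs an isolating hyperplane by hand to show why isolation implies exact analytic recovery.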
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
Cite as: arXiv:2604.15063 [cs.LG] (arXiv:2604.15063v1 for this version), https://doi.org/10.48550/arXiv.2604.15063
Submission history: [v1] From Francesco Diana, Thu, 16 Apr 2026 14:28:19 UTC (359 KB)
Article Info
Source: arXiv Security
Category: ◬ AI & Machine Learning
Published: Apr 17, 2026
Archived: Apr 17, 2026
Full Text: ✓ Saved locally