
Variational Autoencoder-Based Black-Box Adversarial Attack on Collaborative DNN Inference

arXiv Security · Archived Apr 29, 2026

arXiv:2508.01107v2 Announce Type: replace



Computer Science > Cryptography and Security

[Submitted on 1 Aug 2025 (v1), last revised 27 Apr 2026 (this version, v2)]

Variational Autoencoder-Based Black-Box Adversarial Attack on Collaborative DNN Inference

Shima Yousefi, Motahare Mounesan, Saptarshi Debroy

In recent years, Deep Neural Networks (DNNs) have become increasingly integral to IoT-based environments, enabling real-time visual computing. However, the limited computational capacity of these devices has motivated the adoption of collaborative DNN inference, where the IoT device offloads part of the inference-related computation to a remote server. Such offloading often requires dynamic DNN partitioning information to be exchanged among the participants over an unsecured network or via relays/hops, leading to novel privacy vulnerabilities. In this paper, we propose AdVAR-DNN, an adversarial variational autoencoder (VAE)-based misclassification attack that leverages classifiers to detect model information and a VAE to generate untraceable manipulated samples, specifically designed to compromise the collaborative inference process. The AdVAR-DNN attack exploits the sensitive-information-exchange vulnerability of collaborative DNN inference and is black-box in nature, requiring no prior knowledge of the DNN model or how it is partitioned. Our evaluation using the most popular object-classification DNNs on the CIFAR-100 dataset demonstrates the effectiveness of AdVAR-DNN in terms of a high attack success rate with little to no probability of detection.

Subjects: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
Cite as: arXiv:2508.01107 [cs.CR] (arXiv:2508.01107v2 [cs.CR] for this version), https://doi.org/10.48550/arXiv.2508.01107
Journal reference: in Proc. IEEE 50th International Conference on Local Computer Networks (LCN), 2025, pp. 1--9

Submission history
From: Motahare Mounesan
[v1] Fri, 1 Aug 2025 22:54:25 UTC (7,589 KB)
[v2] Mon, 27 Apr 2026 19:34:47 UTC (7,593 KB)
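The collaborative inference setup the abstract describes — an IoT device running the first few layers of a DNN, then shipping the intermediate activation together with the dynamic partitioning metadata to a remote server that finishes the computation — can be sketched as follows. This is a toy illustration only, assuming a stack of random dense layers in place of a real DNN; the names `device_side`, `server_side`, and `partition_point` are illustrative, not taken from the paper.

```python
# Toy sketch of collaborative DNN inference with dynamic partitioning.
import numpy as np

rng = np.random.default_rng(0)

# A 4-layer stand-in network: each "layer" is a weight matrix + ReLU.
LAYERS = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]

def run_layers(x, layers):
    for W in layers:
        x = np.maximum(x @ W, 0.0)  # affine transform + ReLU
    return x

def device_side(x, partition_point):
    """IoT device: run the first `partition_point` layers, then ship the
    intermediate activation plus the partition metadata. In the paper's
    threat model, this message crosses an unsecured network or relays."""
    z = run_layers(x, LAYERS[:partition_point])
    return {"partition_point": partition_point, "activation": z}

def server_side(msg):
    """Remote server: resume inference from the advertised split point."""
    return run_layers(msg["activation"], LAYERS[msg["partition_point"]:])

x = rng.standard_normal(8)
full = run_layers(x, LAYERS)            # monolithic on-device inference
split = server_side(device_side(x, 2))  # offloaded after layer 2
assert np.allclose(full, split)         # partitioning itself is lossless
```

The exchanged `partition_point` field is exactly the kind of sensitive partitioning information the paper identifies as an attack surface: an eavesdropper who reads it learns where the model is split without querying the model itself.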
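The other half of the abstract's idea — using a VAE to generate "untraceable" manipulated samples — rests on perturbing in latent space rather than pixel space, so the adversarial sample looks like an ordinary decoder output instead of an input plus visible noise. The sketch below illustrates only that latent-manipulation pattern with random placeholder weights; it is not AdVAR-DNN itself, and a real attack would pick the latent direction via black-box queries to the victim model rather than at random.

```python
# Minimal numpy sketch of latent-space sample manipulation with a
# VAE-like encoder/decoder pair (random, untrained weights).
import numpy as np

rng = np.random.default_rng(1)
D, Z = 16, 4  # data and latent dimensionality

W_enc = rng.standard_normal((D, Z)) * 0.1  # encoder (mean head only)
W_dec = rng.standard_normal((Z, D)) * 0.1  # decoder

def encode(x):
    return np.tanh(x @ W_enc)

def decode(z):
    return np.tanh(z @ W_dec)

def craft_adversarial(x, step=0.5):
    """Encode an intercepted sample, nudge it in latent space, decode.
    Here the direction is random; an actual attack would choose it to
    flip the victim classifier's prediction."""
    z = encode(x)
    direction = rng.standard_normal(Z)
    direction /= np.linalg.norm(direction)
    return decode(z + step * direction)

x = rng.standard_normal(D)
x_adv = craft_adversarial(x)
# x_adv lies in the decoder's output space, like any benign
# reconstruction, rather than being x plus additive noise.
```

Because the manipulated sample comes out of the decoder, it stays on (an approximation of) the data manifold, which is what makes detection hard in the setting the abstract describes.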
Article Info
Source: arXiv Security
Category: AI & Machine Learning
Published: Apr 29, 2026
Archived: Apr 29, 2026
Full Text: Saved locally