Survival of the Cheapest: Cost-Aware Hardware Adaptation for Adversarial Robustness
arXiv Security · Archived Apr 23, 2026
Computer Science > Cryptography and Security
[Submitted on 11 Sep 2024 (v1), last revised 22 Apr 2026 (this version, v2)]
Charles Meyers, Mohammad Reza Saleh Sedghpour, Tommy Löfstedt, Erik Elmroth
Deploying adversarially robust machine learning systems requires continuous trade-offs between robustness, cost, and latency. We present an autonomic decision-support framework providing a quantitative foundation for adaptive hardware selection and hyper-parameter tuning in cloud-native deep learning. The framework applies accelerated failure time (AFT) models to quantify the effect of hardware choice, batch size, epochs, and validation accuracy on model survival time. This framework can be naturally integrated into an autonomic control loop (monitor--analyse--plan--execute, MAPE-K), where system metrics such as cost, robustness, and latency are continuously evaluated and used to adapt model configurations and hardware selection. Experiments across three GPU architectures confirm the framework is both sound and cost-effective: the Nvidia L4 yields a 20% increase in adversarial survival time while costing 75% less than the V100, demonstrating that expensive hardware does not necessarily improve robustness. The analysis further reveals that model inference latency is a stronger predictor of adversarial robustness than training time or hardware configuration.
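The core statistical tool named in the abstract is the accelerated failure time (AFT) model, in which covariates act multiplicatively on survival time: log T = β₀ + βᵀx + σε. A minimal sketch of this relationship is below; the covariates and coefficient values are purely illustrative assumptions, not figures from the paper, though the L4 coefficient is chosen so that the implied acceleration factor matches the 20% survival-time increase the abstract reports.

```python
import math

# Hedged sketch of an accelerated failure time (AFT) model:
#   log T = beta0 + beta . x + sigma * eps
# so the median survival time scales as exp(beta . x).
# All coefficients below are illustrative assumptions.

def median_survival(beta0, betas, x):
    """Median survival time implied by a log-linear AFT model."""
    return math.exp(beta0 + sum(b * xi for b, xi in zip(betas, x)))

# Hypothetical covariates: [is_L4_gpu, log(batch_size), inference_latency]
betas = [0.18, -0.05, -0.40]  # illustrative effect sizes

base = median_survival(2.0, betas, [0, math.log(64), 1.2])  # baseline GPU
l4   = median_survival(2.0, betas, [1, math.log(64), 1.2])  # switch to L4

# The ratio is the acceleration factor exp(beta_L4) = exp(0.18) ~ 1.20,
# i.e. roughly a 20% longer adversarial survival time.
print(f"acceleration factor for L4: {l4 / base:.2f}")
```

Because covariates enter the time scale multiplicatively, each fitted coefficient translates directly into a percentage change in survival time, which is what makes AFT models convenient inside a MAPE-K loop that must compare hardware configurations by cost per unit of robustness.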
Subjects: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Applications (stat.AP)
Cite as: arXiv:2409.07609 [cs.CR]
(or arXiv:2409.07609v2 [cs.CR] for this version)
https://doi.org/10.48550/arXiv.2409.07609
Submission history
From: Charles Meyers
[v1] Wed, 11 Sep 2024 20:43:59 UTC (499 KB)
[v2] Wed, 22 Apr 2026 17:36:44 UTC (408 KB)