Quantum Patches: Enhancing Robustness of Quantum Machine Learning Models
Quantum Physics
[Submitted on 9 Apr 2026]
Ban Q. Tran, Chuong K. Luong, Viet Q. Nguyen, Duong M. Chu, Susan Mengel
Machine learning models and their applications, such as autonomous driving systems, are becoming increasingly common and are essential components of human daily life. However, due to their sensitivity to perturbed noise, these models are easily susceptible to adversarial attacks. Not only are classical machine learning models affected, but quantum machine learning (QML) models have also been shown to be vulnerable to adversarial attacks, which degrade their performance. To defend against these types of attacks, several classical methods have been proposed. Among these, a prominent approach uses various types of pseudo-noise during training to enhance the model's robustness against real-world attacks. One recently emerging solution is to leverage the unique properties of quantum circuits to create quantum-based pseudo-noise, similar to real perturbed noise, to counter adversarial attacks. This paper proposes a solution that utilizes random quantum circuits (RQCs) as adversarial data to help QML models overcome these adversarial attacks. The results reported in this paper show that the data generated by RQCs provides an effect similar to training with adversarial data on high-feature datasets. This quantum-based pseudo-noise yielded a significant reduction in the successful attack rate on the CIFAR-10 dataset, from 89.8% to 68.45%. For the CINIC-10 dataset, the successful attack rate decreased from 94.23% to 78.68%. This research opens up avenues for applying unique quantum properties, such as superposition, entanglement, and even decoherence, to enhance the quality of machine learning models.
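The abstract describes generating pseudo-noise from random quantum circuits and mixing it into training images. The paper's actual circuit design is not given here, so the following is a minimal NumPy sketch under assumptions: a small statevector simulator applies random RY rotations and a CNOT entangling layer (a common RQC layout), and the resulting measurement probabilities, centred and scaled, serve as a noise perturbation. The function names (`random_quantum_circuit_noise`, `perturb_image`) and the circuit width/depth are illustrative, not the authors' implementation.

```python
import numpy as np


def random_quantum_circuit_noise(n_qubits, depth, rng):
    """Simulate a small random quantum circuit (random RY rotations
    followed by a chain of CNOTs per layer) and return its measurement
    probability distribution as a pseudo-noise vector.

    NOTE: illustrative RQC layout, not the paper's actual circuit.
    """
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0  # start in |0...0>

    def apply_single(state, qubit, gate):
        # Apply a 2x2 gate to one qubit of the statevector.
        state = state.reshape([2] * n_qubits)
        state = np.moveaxis(state, qubit, 0)
        shape = state.shape
        state = (gate @ state.reshape(2, -1)).reshape(shape)
        state = np.moveaxis(state, 0, qubit)
        return state.reshape(dim)

    def apply_cnot(state, control, target):
        # Swap the target-qubit amplitudes on the control=1 subspace.
        state = state.reshape([2] * n_qubits)
        idx0 = [slice(None)] * n_qubits
        idx1 = [slice(None)] * n_qubits
        idx0[control], idx0[target] = 1, 0
        idx1[control], idx1[target] = 1, 1
        tmp = state[tuple(idx0)].copy()
        state[tuple(idx0)] = state[tuple(idx1)]
        state[tuple(idx1)] = tmp
        return state.reshape(dim)

    for _ in range(depth):
        for q in range(n_qubits):
            theta = rng.uniform(0.0, 2.0 * np.pi)
            ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                           [np.sin(theta / 2),  np.cos(theta / 2)]])
            state = apply_single(state, q, ry)
        for q in range(n_qubits - 1):
            state = apply_cnot(state, q, q + 1)

    return np.abs(state) ** 2  # born-rule measurement probabilities


def perturb_image(image, eps, rng, n_qubits=4, depth=3):
    """Add RQC-derived pseudo-noise (zero-centred, scaled by eps) to an
    image in [0, 1]; this is the hypothesized augmentation step."""
    probs = random_quantum_circuit_noise(n_qubits, depth, rng)
    noise = np.resize(probs - probs.mean(), image.shape)
    return np.clip(image + eps * noise, 0.0, 1.0)
```

In an adversarial-training loop, `perturb_image` would be applied to each training batch so the model sees quantum-derived perturbations alongside clean inputs; the probabilities always sum to one because the circuit evolution is unitary.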
Comments: 12 pages
Subjects: Quantum Physics (quant-ph)
Cite as: arXiv:2604.08827 [quant-ph]
(or arXiv:2604.08827v1 [quant-ph] for this version)
https://doi.org/10.48550/arXiv.2604.08827
Submission history
From: Ban Tran
[v1] Thu, 9 Apr 2026 23:57:28 UTC (3,387 KB)