This paper has been withdrawn by arXiv Admin
[Submitted on 2 Jul 2025 (v1), last revised 24 Apr 2026 (this version, v2)]
Intrinsic Fingerprint of LLMs: Continue Training is NOT All You Need to Steal A Model!
Do-hyeon Yoon, Minsoo Chun, Thomas Allen, Hans Müller, Min Wang, Rajesh Sharma
Large language models (LLMs) face significant copyright and intellectual property challenges as the cost of training increases and model reuse becomes prevalent. While watermarking techniques have been proposed to protect model ownership, they may not be robust to continued training and development, posing serious threats to model attribution and copyright protection. This work introduces a simple yet effective approach for robust LLM fingerprinting based on intrinsic model characteristics. We discover that the standard deviation distributions of attention parameter matrices across different layers exhibit distinctive patterns that remain stable even after extensive continued training. These parameter distribution signatures serve as robust fingerprints that can reliably identify model lineage and detect potential copyright infringement. Our experimental validation across multiple model families demonstrates the effectiveness of our method for model authentication. Notably, our investigation uncovers evidence that the recently released Pangu Pro MoE model from Huawei is derived from the Qwen-2.5 14B model through upcycling techniques rather than training from scratch, highlighting potential cases of model plagiarism, copyright violation, and information fabrication. These findings underscore the critical importance of developing robust fingerprinting methods for protecting intellectual property in large-scale model development and emphasize that deliberate continued training alone is insufficient to completely obscure model origins.
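The core idea the abstract describes reduces to a small computation: collect the standard deviation of each attention weight matrix, layer by layer, and compare the resulting curves between two models. The following is a minimal sketch of that idea, assuming Hugging Face transformers-style checkpoints; the projection-module names and the use of Pearson correlation as the similarity score are illustrative assumptions, since the abstract does not specify the exact extraction or comparison procedure.

```python
# Minimal sketch of the fingerprinting idea described in the abstract:
# compute per-layer standard deviations of attention projection weights
# and compare the resulting "signature" curves between two models.
# The q/k/v/o projection names and the Pearson-correlation score are
# assumptions for illustration, not the authors' exact recipe.
import numpy as np
import torch
from transformers import AutoModelForCausalLM

# Typical attention projection names in Llama/Qwen-style architectures.
ATTN_KEYS = ("q_proj", "k_proj", "v_proj", "o_proj")

def attention_std_fingerprint(model_name: str) -> np.ndarray:
    """Return the per-matrix standard deviations of all attention weights."""
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
    stds = []
    for name, param in model.named_parameters():
        # Keep only the attention projection weight matrices, in layer order.
        if name.endswith("weight") and any(k in name for k in ATTN_KEYS):
            stds.append(param.detach().std().item())
    return np.array(stds)

def fingerprint_similarity(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Pearson correlation between two same-length fingerprint vectors."""
    return float(np.corrcoef(fp_a, fp_b)[0, 1])

# Hypothetical usage (model names are placeholders):
# fp_base = attention_std_fingerprint("Qwen/Qwen2.5-14B")
# fp_test = attention_std_fingerprint("some-org/suspect-model")
# print(fingerprint_similarity(fp_base, fp_test))
```

Under the abstract's claim, a checkpoint derived from a base model should track the base's per-layer signature almost perfectly even after heavy continued training, while independently trained models of the same architecture should not; note that comparing a dense base to an upcycled MoE model would additionally require mapping expert weights back to their source projections, which this sketch does not attempt.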
Comments: arXiv admin note: This paper has been withdrawn by arXiv due to unverifiable authorship and affiliation
Subjects: Cryptography and Security (cs.CR); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2507.03014 [cs.CR]
(or arXiv:2507.03014v2 [cs.CR] for this version)
https://doi.org/10.48550/arXiv.2507.03014
Submission history
From: arXiv Admin
[v1] Wed, 2 Jul 2025 12:29:38 UTC (195 KB)
[v2] Fri, 24 Apr 2026 14:19:43 UTC (1 KB) (withdrawn)
Access Paper: Withdrawn