CyberIntel ⬡ News
AI & Machine Learning · Apr 20, 2026

Too Private to Tell: Practical Token Theft Attacks on Apple Intelligence

arXiv Security · Archived Apr 20, 2026 · ✓ Full text saved



Computer Science > Cryptography and Security
[Submitted on 17 Apr 2026]

Haoling Zhou, Shixuan Zhao, Chao Wang, Zhiqiang Lin (The Ohio State University)

Abstract: Apple Intelligence is a generative AI (GenAI) service provided by Apple on its devices. While offering a feature set similar to other GenAI services, Apple Intelligence is claimed to be designed with an extra focus on user security and privacy through a two-stage authentication and authorization design using anonymous access tokens. In this paper, we present our investigation into this token issuance mechanism, with the goal of revealing possible vulnerabilities through traffic analysis, reverse engineering, and cross-comparison with Apple's public documentation. Specifically, we present the Serpent attack, the first practical cross-device token replay attack against Apple Intelligence, which allows an attacker to steal access tokens from the victim's device and use them on a different device, with all usage rate-limited against the victim. We have demonstrated successful attacks on the latest macOS 26 Tahoe, showing that an attacker who has even exhausted its own allowance can immediately regain access to the Apple Intelligence service. We responsibly disclosed the vulnerabilities to the vendors and received confirmation from Apple, with a CVE assigned and a bounty awarded. Our results highlight a general lesson for built-in AI services: anonymizing identity does not by itself make an AI service secure; enforcing non-transferability requires cryptographic binding to the rightful user.
Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2604.15637 [cs.CR] (or arXiv:2604.15637v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2604.15637
Submission history: From Haoling Zhou, [v1] Fri, 17 Apr 2026 02:32:29 UTC (789 KB)
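The abstract's closing lesson is that anonymizing identity does not make tokens safe against theft; non-transferability requires binding each token to its rightful holder. The sketch below contrasts an anonymous bearer token, which any device can replay, with a token whose MAC also covers a device-key fingerprint. All names, the token format, and the HMAC scheme are purely illustrative assumptions, not Apple's actual protocol.

```python
import hashlib
import hmac
import secrets

# Server-side signing key (illustrative; in practice managed by the service).
SERVER_KEY = secrets.token_bytes(32)

def issue_token(device_key_fpr: bytes = b"") -> bytes:
    """Issue an anonymous token; optionally bind it to a device-key fingerprint.

    The token is nonce || HMAC(server_key, nonce || fingerprint). With an
    empty fingerprint this degenerates to a plain bearer token.
    """
    nonce = secrets.token_bytes(16)
    tag = hmac.new(SERVER_KEY, nonce + device_key_fpr, hashlib.sha256).digest()
    return nonce + tag

def redeem(token: bytes, presented_fpr: bytes = b"") -> bool:
    """Accept the token only if the MAC verifies for the presented fingerprint."""
    nonce, tag = token[:16], token[16:]
    expected = hmac.new(SERVER_KEY, nonce + presented_fpr, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# Bearer token: a stolen copy replays from any device (the Serpent failure mode).
bearer = issue_token()
assert redeem(bearer)  # attacker's device succeeds just as well

# Bound token: replay from a device with a different key fingerprint is rejected.
victim_fpr = hashlib.sha256(b"victim-device-key").digest()
attacker_fpr = hashlib.sha256(b"attacker-device-key").digest()
bound = issue_token(victim_fpr)
assert redeem(bound, victim_fpr)
assert not redeem(bound, attacker_fpr)
```

A real design would go further: the device would hold a private key and answer a fresh challenge with a signature (proof of possession), so the binding secret never travels with the token. The fingerprint check above only illustrates the principle that a token cryptographically tied to one device fails verification on any other.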