CyberIntel ⬡ News
◬ AI & Machine Learning · Apr 24, 2026

Benchmarking the Utility of Privacy-Preserving Cox Regression Under Data-Driven Clipping Bounds: A Multi-Dataset Simulation Study

arXiv Security · Archived Apr 24, 2026 · ✓ Full text saved




Computer Science > Cryptography and Security
[Submitted on 23 Apr 2026]

Keita Fukuyama, Yukiko Mori, Tomohiro Kuroda, Hiroaki Kikuchi

Differential privacy (DP) is a mathematical framework that guarantees individual privacy; however, systematic evaluation of its impact on statistical utility in survival analysis remains limited. In this study, we systematically evaluated the impact of DP mechanisms (the Laplace mechanism and randomized response) with data-driven clipping bounds on the Cox proportional hazards model, using 5 clinical datasets (n = 168–6,524), 15 levels of ε (0.1–1000), and B = 1,000 Monte Carlo iterations. The data-driven clipping bounds used here are the observed min/max and therefore do not provide formal ε-DP guarantees; the results represent an optimistic lower bound on utility degradation under formal DP. We compared three types of input perturbation (covariates only, all inputs, and a discrete-time model) with output perturbation (dfbeta-based sensitivity), using the loss-of-significance rate (LSR), C-index, and coefficient bias as metrics. At standard DP levels (ε ≤ 1), approximately 90% (90–94%) of the significant covariates lost significance, even in the largest dataset (n = 6,524), and predictive performance approached random levels (test C-index ≈ 0.5) under many conditions. Among the input-perturbation approaches, perturbing only the covariates preserved the risk-set structure and achieved the best recovery, whereas output perturbation (dfbeta-based sensitivity) maintained near-baseline performance at ε ≥ 5. At n ≈ 3,000, significance recovered rapidly at ε = 3–10; in practice, however, ε ≥ 10 (for predictive performance) to ε ≥ 30–60 (for significance preservation) is required.
In the moderate-to-high ε range, false-positive rates increased for variables whose baseline p-values were near the significance threshold.

Comments: 11 pages, 6 figures, 5 tables. Supplementary material (5 pages, 2 figures, 3 tables) included as an ancillary file (supplementary.pdf). Submitted to IEEE Journal of Biomedical and Health Informatics (J-BHI).
Subjects: Cryptography and Security (cs.CR); Applications (stat.AP); Methodology (stat.ME)
MSC classes: 62N02, 62P10, 68P27
ACM classes: E.3; J.3; G.3
Cite as: arXiv:2604.21491 [cs.CR] (arXiv:2604.21491v1 for this version), https://doi.org/10.48550/arXiv.2604.21491
Submission history: [v1] Thu, 23 Apr 2026 09:53:15 UTC (537 KB), from Keita Fukuyama
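The two input-perturbation mechanisms the abstract names can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the paper's implementation: it clips a covariate to its observed min/max (the data-driven bounds the authors themselves caution do not yield formal ε-DP) and adds Laplace noise, and it applies standard randomized response to a binary event indicator. Function names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb(x, epsilon):
    """Laplace input perturbation with data-driven clipping bounds.

    NOTE: the observed min/max are data-dependent, so this does NOT
    carry a formal epsilon-DP guarantee; per the abstract, results
    built on it are an optimistic bound on utility under formal DP.
    """
    lo, hi = x.min(), x.max()       # data-driven bounds (not private)
    sensitivity = hi - lo           # per-record L1 sensitivity after clipping
    noise = rng.laplace(0.0, sensitivity / epsilon, size=x.shape)
    return np.clip(x, lo, hi) + noise

def randomized_response(events, epsilon):
    """Standard epsilon-DP randomized response on a binary indicator:
    keep the true bit with probability e^eps / (e^eps + 1), else flip."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(events.shape) >= p_keep
    return np.where(flip, 1 - events, events)

# toy survival inputs: one covariate (e.g. age) and an event indicator
age = rng.normal(60, 10, size=500)
event = rng.integers(0, 2, size=500)

age_dp = laplace_perturb(age, epsilon=1.0)
event_dp = randomized_response(event, epsilon=1.0)
```

At ε = 1 the noise scale equals the full observed covariate range, which is consistent with the abstract's finding that most covariates lose significance at ε ≤ 1.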
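For reference, the C-index the study uses as its predictive-utility metric (with ≈ 0.5 meaning random ranking) can be computed directly. Below is a minimal O(n²) sketch of Harrell's concordance index; the helper and toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index.

    Over comparable pairs (the earlier time must be an observed event),
    count how often the higher predicted risk goes with the earlier
    event; ties in risk count 1/2. Returns a value in [0, 1], with
    0.5 corresponding to random predictions.
    """
    conc, ties, total = 0.0, 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                  # i must be an observed event
        for j in range(n):
            if time[j] > time[i]:     # j outlived i's event: comparable pair
                total += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total

# earlier events get strictly higher risk, so concordance is perfect
t = np.array([1.0, 2.0, 3.0, 4.0])
e = np.array([1, 1, 1, 0])
r = np.array([4.0, 3.0, 2.0, 1.0])
print(c_index(t, e, r))   # 1.0
```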