CyberIntel ⬡ News
◬ AI & Machine Learning · May 12, 2026

Improving Parameter-Efficient Federated Learning with Differentially Private Refactorization

arXiv Security Archived May 12, 2026 ✓ Full text saved




Computer Science > Cryptography and Security
[Submitted on 8 May 2026]

Improving Parameter-Efficient Federated Learning with Differentially Private Refactorization
Linh Tran, Ana Milanova, Stacy Patterson

Federated Learning (FL) with parameter-efficient fine-tuning, such as Low-Rank Adaptation (LoRA), enables scalable model training on distributed data. However, when combined with Differential Privacy (DP), LoRA often introduces errors during global aggregation and amplifies the negative effect of DP noise. Existing cross-silo FL approaches mitigate the aggregation error by freezing one LoRA module and applying output perturbation. However, in a restricted low-rank subspace, this additive noise frequently overwhelms the signal of the weight matrices, leading to suboptimal accuracy.

To address this vulnerability, we propose FedPower, a differentially private cross-silo FL framework that reshapes server-side aggregation. Instead of perturbing mismatched low-rank factors, FedPower explicitly reconstructs and clips full-rank client updates to bound the sensitivity. The server then projects the exact aggregated update back into a secure low-rank space using PowerDP, a novel differentially private low-rank factorization mechanism. Based on simultaneous subspace iteration, PowerDP injects calibrated DP noise prior to the final orthonormalization step, effectively mitigating the negative effect of DP noise by preserving matrix orthogonality.

We provide rigorous theoretical analyses establishing sensitivity bounds for subspace projections, proving that FedPower achieves both sample-level and client-level DP. Extensive experiments on various language understanding tasks in cross-silo FL settings show that FedPower is robust against tight privacy budgets while adding negligible computational overhead. An additional empirical study of different DP noise injection schemes validates the effectiveness of PowerDP in improving the tradeoff between accuracy and privacy. Evaluation against three different membership inference attacks validates the robustness and privacy-preserving capability of the proposed framework.

Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2605.08443 [cs.CR] (arXiv:2605.08443v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2605.08443
Submission history: From Linh Tran. [v1] Fri, 8 May 2026 20:06:09 UTC (2,022 KB)
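The server-side pipeline the abstract describes — reconstruct full-rank client updates from their LoRA factors, clip them to bound sensitivity, aggregate exactly, then project the result back into a low-rank subspace with noise injected before orthonormalization — can be sketched roughly as below. This is an illustrative NumPy sketch based only on the abstract, not the authors' implementation: the function names, the per-iteration noise placement inside the subspace iteration, and all parameters (`clip_norm`, `noise_std`, `n_iters`) are assumptions.

```python
import numpy as np

def clip_update(delta, clip_norm):
    """Scale a full-rank client update so its Frobenius norm is at most
    clip_norm, bounding each client's sensitivity."""
    norm = np.linalg.norm(delta)
    return delta * min(1.0, clip_norm / max(norm, 1e-12))

def power_dp(M, rank, noise_std, n_iters=4, rng=None):
    """PowerDP-style DP low-rank factorization (hypothetical sketch):
    simultaneous subspace iteration on M, with Gaussian noise added
    before each QR orthonormalization so the returned basis stays
    exactly orthonormal despite the perturbation."""
    rng = np.random.default_rng() if rng is None else rng
    d_out, d_in = M.shape
    # Random orthonormal starting basis for the right subspace.
    Q = np.linalg.qr(rng.standard_normal((d_in, rank)))[0]
    for _ in range(n_iters):
        # Noisy power step, then orthonormalize (noise precedes QR).
        Y = M @ Q + noise_std * rng.standard_normal((d_out, rank))
        U = np.linalg.qr(Y)[0]
        Z = M.T @ U + noise_std * rng.standard_normal((d_in, rank))
        Q = np.linalg.qr(Z)[0]
    B = M @ Q  # low-rank factors: M is approximated by B @ Q.T
    return B, Q

def aggregate(client_factors, clip_norm, rank, noise_std):
    """Server step: rebuild full-rank updates from LoRA pairs (B_i, A_i),
    clip each, average exactly, then project back to a DP low-rank space."""
    full = [clip_update(B @ A, clip_norm) for B, A in client_factors]
    avg = sum(full) / len(full)
    return power_dp(avg, rank, noise_std)
```

Clipping the reconstructed full-rank product `B_i @ A_i` (rather than the individual factors) is what lets the sensitivity of the exact average be bounded; the QR step after each noise injection is what preserves orthogonality of the output basis.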