CISPA

FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations

conference contribution
posted on 2024-03-20, 13:12 authored by Hui-Po Wang, Dingfan Chen, Raouf Kerkouche, Mario Fritz
This work proposes FedLAP-DP, a novel privacy-preserving approach to federated learning. Unlike previous linear point-wise gradient-sharing schemes such as FedAvg, our formulation enables a form of global optimization by leveraging synthetic samples received from the clients. These synthetic samples serve as loss surrogates: they approximate the local loss landscapes by simulating the utility of the real images within a local region. We additionally introduce an approach to measure the effective approximation region, reflecting the quality of the approximation. The server can therefore recover an approximation of the global loss landscape and optimize the model globally. Moreover, motivated by emerging privacy concerns, we demonstrate that our approach works seamlessly with record-level differential privacy (DP), granting a theoretical privacy guarantee for every data record on the clients. Extensive results validate the efficacy of our formulation on various datasets, and our method consistently improves over the baselines, especially under highly skewed data distributions and the noisy gradients induced by DP. The source code will be released upon publication.
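To make the idea of "synthetic samples as loss surrogates" concrete, below is a minimal, self-contained NumPy sketch. It is an illustrative simplification, not the paper's implementation: it uses a toy linear-regression client instead of images, realizes the loss surrogate via a simple gradient-matching objective (the client fits a few synthetic samples so their loss gradient matches the real-data gradient at weights probed from a local region), and omits the DP mechanism (per-record gradient clipping plus Gaussian noise) entirely. All variable names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy client data for linear regression (an illustrative stand-in for images).
X_real = rng.normal(size=(64, 3))
w_true = np.array([1.5, -2.0, 0.5])
y_real = X_real @ w_true + 0.1 * rng.normal(size=64)

def grad(w, X, y):
    """Gradient of the mean-squared-error loss w.r.t. the weights w."""
    return X.T @ (X @ w - y) / len(y)

# Client side: optimize a handful of synthetic samples so that their loss
# gradient matches the real-data gradient at weights drawn from a local
# region -- a crude stand-in for the paper's region-wise loss approximation.
X_syn = rng.normal(size=(8, 3))
y_syn = rng.normal(size=8)
lr = 0.05
for _ in range(2000):
    w = rng.normal(scale=0.5, size=3)            # probe the local region
    diff = grad(w, X_syn, y_syn) - grad(w, X_real, y_real)
    r = X_syn @ w - y_syn
    # Gradients of 0.5*||diff||^2 w.r.t. the synthetic samples (chain rule).
    gX = (np.outer(r, diff) + np.outer(X_syn @ diff, w)) / len(y_syn)
    gy = -(X_syn @ diff) / len(y_syn)
    X_syn -= lr * gX
    y_syn -= lr * gy

# Server side: recover an approximation of the (here, single-client) loss
# landscape from the synthetic samples alone and optimize the model globally.
w_srv = np.zeros(3)
for _ in range(500):
    w_srv -= 0.1 * grad(w_srv, X_syn, y_syn)
```

Because the server only ever sees the eight synthetic samples, the real records never leave the client; in the actual method, the synthetic-sample optimization would additionally be run with clipped, noised per-record gradients to obtain the record-level DP guarantee.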

History

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

International Conference on Learning Representations (ICLR)

BibTeX

@conference{Wang:Chen:Kerkouche:Fritz:2023,
  title     = "FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations",
  author    = "Wang, Hui-Po and Chen, Dingfan and Kerkouche, Raouf and Fritz, Mario",
  booktitle = "International Conference on Learning Representations (ICLR)",
  year      = 2023,
  month     = 9
}
