This work proposes FedLAP-DP, a novel privacy-preserving approach for federated learning. Unlike previous linear point-wise gradient-sharing schemes, such as FedAvg, our formulation enables a form of global optimization by leveraging synthetic samples received from clients. These synthetic samples serve as loss surrogates: they approximate local loss landscapes by simulating the utility of real images within a local region. We additionally introduce a way to measure the effective approximation region, reflecting the quality of the approximation. The server can therefore recover an approximation of the global loss landscape and optimize the model globally. Moreover, motivated by emerging privacy concerns, we demonstrate that our approach works seamlessly with record-level differential privacy (DP), granting theoretical privacy guarantees for every data record on the clients. Extensive results validate the efficacy of our formulation on various datasets with highly skewed distributions. Our method consistently improves over the baselines, especially under highly skewed distributions and noisy gradients induced by DP. The source code and setup will be released upon publication.
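The core idea of loss surrogates can be illustrated with a minimal gradient-matching sketch: a client fits a small synthetic set whose gradient at the current model approximates a DP-noised gradient of its real data, so the server can optimize against the synthetic set instead of raw gradients. This is an illustrative toy (logistic regression, finite-difference matching, a DP-SGD-style clip-and-noise mechanism); all names, shapes, and hyperparameters are assumptions and the paper's actual procedure and privacy accounting differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Logistic-regression gradient w.r.t. weights w on data (X, y).
def grad(w, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Hypothetical client data (purely illustrative, not from the paper).
X_real = rng.normal(size=(64, 5))
y_real = (X_real @ rng.normal(size=5) > 0).astype(float)
w = rng.normal(size=5)

# Record-level DP: clip each per-example gradient, then add Gaussian noise
# (a standard DP-SGD-style mechanism; the paper's exact accounting differs).
def dp_grad(w, X, y, clip=1.0, sigma=0.5):
    per_ex = np.stack([grad(w, X[i:i + 1], y[i:i + 1]) for i in range(len(y))])
    norms = np.linalg.norm(per_ex, axis=1, keepdims=True)
    per_ex = per_ex / np.maximum(1.0, norms / clip)
    noise = rng.normal(scale=sigma * clip / len(y), size=w.shape)
    return per_ex.mean(axis=0) + noise

target = dp_grad(w, X_real, y_real)

# Gradient matching: optimize a small synthetic set so its gradient at w
# approximates the noisy real gradient -- the "loss surrogate" idea.
X_syn = rng.normal(size=(8, 5))
y_syn = rng.integers(0, 2, size=8).astype(float)
init_err = np.linalg.norm(grad(w, X_syn, y_syn) - target)

lr, eps = 0.2, 1e-4
for _ in range(200):
    # Finite-difference gradient of the matching loss w.r.t. X_syn.
    base = np.sum((grad(w, X_syn, y_syn) - target) ** 2)
    g = np.zeros_like(X_syn)
    for i in range(X_syn.shape[0]):
        for j in range(X_syn.shape[1]):
            X_pert = X_syn.copy()
            X_pert[i, j] += eps
            g[i, j] = (np.sum((grad(w, X_pert, y_syn) - target) ** 2) - base) / eps
    X_syn -= lr * g

match_err = np.linalg.norm(grad(w, X_syn, y_syn) - target)
```

After optimization, `match_err` is well below `init_err`: the eight synthetic records reproduce the client's (privatized) gradient signal at `w`, which is what lets a server use them as a local loss surrogate. The real method extends this to a region around `w` rather than a single point.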
Primary Research Area
Trustworthy Information Processing
Open Access Type
Green
BibTeX
@misc{Wang:Chen:Kerkouche:Fritz:2023,
  title = "FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations",
  author = "Wang, Hui-Po and Chen, Dingfan and Kerkouche, Raouf and Fritz, Mario",
  year = 2023,
  month = 2,
  doi = "10.48550/arXiv.2302.01068"
}