CISPA

Locally Adaptive Federated Learning

journal contribution
posted on 2024-10-10, 13:01 authored by Sohom Mukherjee, Nicolas Loizou, Sebastian StichSebastian Stich
Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FedAvg) ensure balance among the clients by using the same stepsize for local updates on all clients. However, this means that all clients must respect the global geometry of the function, which can yield slow convergence. In this work, we propose locally adaptive federated learning algorithms that leverage the local geometric information of each client function. We show that such locally adaptive methods with uncoordinated stepsizes across all clients can be particularly efficient in interpolated (overparameterized) settings, and we analyze their convergence in the presence of heterogeneous data for convex and strongly convex settings. We validate our theoretical claims with illustrative experiments for both i.i.d. and non-i.i.d. cases. Our proposed algorithms match the optimization performance of tuned FedAvg in the convex setting, outperform FedAvg as well as state-of-the-art adaptive federated algorithms like FedAMS in non-convex experiments, and deliver superior generalization performance.
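The abstract's point about a single global stepsize can be illustrated with a toy sketch. The snippet below is not the paper's algorithm: the specific adaptive stepsize rule is defined in the paper itself, and here each client simply uses the inverse of its own curvature as a stand-in for "local geometric information". Two quadratic clients share a common minimizer (the interpolated setting) but have very different curvatures, so a single FedAvg stepsize, capped by the stiffest client, makes the flat client crawl, while per-client stepsizes converge immediately.

```python
import numpy as np

# Two toy quadratic clients f_i(x) = 0.5 * a_i * (x - b_i)^2 that share
# a common minimizer b_i = 1 (interpolation) but differ in curvature a_i.
clients = [(100.0, 1.0), (1.0, 1.0)]  # (curvature a_i, minimizer b_i)

def grad(a, b, x):
    return a * (x - b)

def fed_round(x, stepsizes, local_steps=5):
    # Each client runs local gradient steps from the server model x;
    # the server then averages the resulting local models (FedAvg-style).
    local_models = []
    for (a, b), lr in zip(clients, stepsizes):
        xi = x
        for _ in range(local_steps):
            xi -= lr * grad(a, b, xi)
        local_models.append(xi)
    return float(np.mean(local_models))

# Vanilla FedAvg: one global stepsize, which must stay below 2/100
# for the stiff client to remain stable -- so the flat client is slow.
x_fedavg = 0.0
for _ in range(10):
    x_fedavg = fed_round(x_fedavg, stepsizes=[0.015, 0.015])

# Locally adaptive variant (illustrative stand-in rule lr_i = 1/a_i):
# each client steps according to its own curvature.
x_adaptive = 0.0
for _ in range(10):
    x_adaptive = fed_round(x_adaptive, stepsizes=[1.0 / 100.0, 1.0 / 1.0])

print(abs(x_fedavg - 1.0), abs(x_adaptive - 1.0))
```

With the curvature-matched stepsizes every client lands on the shared minimizer in one local step, while the globally tuned stepsize leaves a visible residual error after the same number of rounds.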

Primary Research Area

  • Trustworthy Information Processing

Journal

Transactions on Machine Learning Research

Sub Type

  • Article

BibTeX

@article{Mukherjee:Loizou:Stich:2024,
  title   = "Locally Adaptive Federated Learning",
  author  = "Mukherjee, Sohom and Loizou, Nicolas and Stich, Sebastian",
  year    = 2024,
  month   = 10,
  journal = "Transactions on Machine Learning Research"
}
