Federated Learning's Blessing: FedAvg has Linear Speedup

by Zhaonan Qu, et al.

Federated learning (FL) learns a model jointly from a set of participating devices without sharing their privately held data. Non-iid data across the network, low device participation, and the mandate that data remain private make it challenging to understand the convergence of FL algorithms, particularly with regard to how convergence scales with the number of participating devices. In this paper, we focus on Federated Averaging (FedAvg), the most widely used and effective FL algorithm today, and provide a comprehensive study of its convergence rate. Although FedAvg has recently been studied by an emerging line of literature, it remains open how FedAvg's convergence scales with the number of participating devices in the FL setting, a crucial question whose answer would shed light on the performance of FedAvg in large FL systems. We fill this gap by establishing convergence guarantees for FedAvg under three classes of problems: strongly convex smooth, convex smooth, and overparameterized strongly convex smooth problems. We show that FedAvg enjoys linear speedup in each case, although with different convergence rates. For each class, we also characterize the corresponding convergence rates for the Nesterov accelerated FedAvg algorithm in the FL setting: to the best of our knowledge, these are the first linear speedup guarantees for FedAvg when Nesterov acceleration is used. To accelerate FedAvg further, we also design a new momentum-based FL algorithm that improves the convergence rate in overparameterized linear regression problems. Empirical studies of the algorithms in various settings support our theoretical results.
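To make the setting concrete, the FedAvg scheme the abstract analyzes alternates a few local SGD steps on each client with a server-side model average. The following is a minimal sketch on a synthetic least-squares problem, not the paper's code; the data generation, hyperparameters, and mild per-client covariate shift (to mimic heterogeneity) are all illustrative assumptions.

```python
import numpy as np

def make_client_data(num_clients=8, n=20, dim=5, seed=0):
    """Synthetic clients sharing one ground-truth model but with
    per-client feature scaling (a simple stand-in for non-iid data)."""
    rng = np.random.default_rng(seed)
    x_true = rng.normal(size=dim)          # shared ground truth (assumption)
    data = []
    for _ in range(num_clients):
        scale = rng.uniform(0.8, 1.2)      # mild covariate shift per client
        A = scale * rng.normal(size=(n, dim))
        b = A @ x_true + 0.01 * rng.normal(size=n)
        data.append((A, b))
    return data

def fedavg(data, rounds=60, local_steps=5, lr=0.2, dim=5):
    """One FedAvg run: each round, every client takes `local_steps`
    gradient steps from the current global model; the server then
    averages the resulting local models."""
    w = np.zeros(dim)                      # global model
    for _ in range(rounds):
        local_models = []
        for A, b in data:
            w_k = w.copy()
            for _ in range(local_steps):   # local (full-batch) SGD steps
                grad = A.T @ (A @ w_k - b) / len(b)
                w_k -= lr * grad
            local_models.append(w_k)
        w = np.mean(local_models, axis=0)  # server-side averaging
    return w

def global_loss(w, data):
    """Average mean-squared error across all clients."""
    return np.mean([np.mean((A @ w - b) ** 2) for A, b in data])

data = make_client_data()
w = fedavg(data)
print(global_loss(w, data))  # far below the loss of the zero model
```

With all clients averaged every round this is full participation; the paper's partial-participation and Nesterov-accelerated variants change only which clients run local steps and how the local update is formed.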




