Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability

02/18/2020
by Yikai Yan, et al.

Federated learning is a new distributed machine learning framework in which a set of heterogeneous clients collaboratively train a model without sharing their training data. In this work, we consider a practical and ubiquitous issue in federated learning: intermittent client availability, where the set of eligible clients may change during the training process. Such intermittent client availability significantly deteriorates the performance of the classical Federated Averaging algorithm (FedAvg for short). We propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short), which leverages the latest gradients of all clients, even when those clients are not currently available, to jointly update the global model in each iteration. Our theoretical analysis shows that FedLaAvg attains a convergence rate of O(1/(N^(1/4) T^(1/2))), achieving a sublinear speedup with respect to the total number of clients. We implement and evaluate FedLaAvg on the CIFAR-10 dataset. The evaluation results demonstrate that FedLaAvg indeed reaches a sublinear speedup and achieves 4.23% higher test accuracy than FedAvg.
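The core mechanism described above, caching each client's most recent gradient and averaging over all clients every round, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the quadratic per-client objectives, the availability probability, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, lr, d = 10, 200, 0.1, 5  # clients, rounds, step size, model dimension

# Hypothetical setup: client i holds a quadratic loss f_i(w) = 0.5*||w - c_i||^2,
# so its (full) gradient at w is simply w - c_i.
centers = rng.normal(size=(N, d))

w = np.zeros(d)            # global model
latest = np.zeros((N, d))  # cache of each client's latest reported gradient

for t in range(T):
    # Intermittent availability: each round, only a random subset of clients
    # is reachable (here, each independently with probability 0.5).
    available = rng.random(N) < 0.5
    for i in np.flatnonzero(available):
        latest[i] = w - centers[i]  # fresh gradient overwrites the stale cache

    # Key idea of FedLaAvg: update with the average of the *latest* gradients
    # of ALL clients, including stale entries from unavailable clients.
    w -= lr * latest.mean(axis=0)

# For this toy problem the minimizer of the average loss is centers.mean(axis=0),
# so w should end up close to it despite the stale gradients.
```

Because stale gradients are bounded in staleness (each client is revisited often in expectation), the averaged direction stays close to the true full gradient, which is what drives the sublinear-speedup analysis in the paper.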


research
02/10/2023

Achieving Linear Speedup in Non-IID Federated Bilevel Learning

Federated bilevel optimization has received increasing attention in vari...
research
02/18/2020

Distributed Optimization over Block-Cyclic Data

We consider practical data characteristics underlying federated learning...
research
12/17/2021

Federated Learning with Heterogeneous Data: A Superquantile Optimization Approach

We present a federated learning framework that is designed to robustly d...
research
09/18/2023

FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data

Federated learning is an emerging distributed machine learning method, e...
research
08/19/2021

Communication-Efficient Federated Learning via Robust Distributed Mean Estimation

Federated learning commonly relies on algorithms such as distributed (mi...
research
10/14/2019

SCAFFOLD: Stochastic Controlled Averaging for On-Device Federated Learning

Federated learning is a key scenario in modern large-scale machine learn...
research
02/12/2021

Efficient Algorithms for Federated Saddle Point Optimization

We consider strongly convex-concave minimax problems in the federated se...
