Asynchronous Federated Optimization

03/10/2019
by Cong Xie, et al.

Federated learning enables training on a massive number of edge devices. To improve flexibility and scalability, we propose a new asynchronous federated optimization algorithm. We prove that the proposed approach has near-linear convergence to a global optimum, for both strongly and non-strongly convex problems, as well as a restricted family of non-convex problems. Empirical results show that the proposed algorithm converges fast and tolerates staleness.
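The server-side rule behind such an asynchronous scheme can be sketched as a staleness-weighted moving average: whenever a local model arrives, the server mixes it into the global model with a weight that shrinks as the update grows stale. The Python sketch below is a minimal illustration of that idea under assumed details; the polynomial staleness function, the mixing rate alpha, and the helper names (staleness_weight, server_update) are illustrative choices, not the paper's exact algorithm.

# Minimal sketch of an asynchronous federated server update: each arriving
# local model is averaged into the global model with a weight that decays
# with staleness. The polynomial decay and all hyperparameter values are
# assumptions for illustration.

import numpy as np

def staleness_weight(alpha, staleness, a=0.5):
    # Polynomial decay: stale updates receive a smaller mixing weight.
    return alpha * (1.0 + staleness) ** (-a)

def server_update(global_model, local_model, current_round, local_round, alpha=0.6):
    staleness = current_round - local_round
    alpha_t = staleness_weight(alpha, staleness)
    # Weighted average: x_t = (1 - alpha_t) * x_{t-1} + alpha_t * x_new
    return (1.0 - alpha_t) * global_model + alpha_t * local_model

# Toy usage: a fresh update moves the global model more than a stale one,
# which is how staleness is tolerated rather than discarded.
x = np.zeros(4)
fresh = server_update(x, np.ones(4), current_round=10, local_round=10)
stale = server_update(x, np.ones(4), current_round=10, local_round=2)
print(fresh)  # step of 0.6 toward the local model (staleness = 0)
print(stale)  # step of 0.2 toward the local model (staleness = 8)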

Related research

03/16/2019
Practical Distributed Learning: Secure Machine Learning with Communication-Efficient Local Updates
Federated learning on edge devices poses new challenges arising from wor...

02/12/2021
Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with Delayed Gradients
Federated learning (FL) is a new machine learning framework which trains...

10/18/2016
Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server
This paper presents an asynchronous incremental aggregated gradient algo...

07/25/2023
Federated Distributionally Robust Optimization with Non-Convex Objectives: Algorithm and Analysis
Distributionally Robust Optimization (DRO), which aims to find an optima...

07/12/2023
Locally Adaptive Federated Learning via Stochastic Polyak Stepsizes
State-of-the-art federated learning algorithms such as FedAvg require ca...

03/16/2019
SLSGD: Secure and Efficient Distributed On-device Machine Learning
We consider distributed on-device learning with limited communication an...

10/14/2021
Resource-constrained Federated Edge Learning with Heterogeneous Data: Formulation and Analysis
Efficient collaboration between collaborative machine learning and wirel...
