DONE: Distributed Newton-type Method for Federated Edge Learning

12/10/2020
by   Canh T. Dinh, et al.

There is growing interest in applying distributed machine learning to edge computing, forming federated edge learning. Compared with conventional distributed machine learning in a datacenter, federated edge learning faces non-independent and identically distributed (non-i.i.d.) and heterogeneous data, and communication between edge workers, which may span distant locations over unstable wireless networks, is far more costly than local computation. In this work, we propose a distributed Newton-type algorithm (DONE) with a fast convergence rate for communication-efficient federated edge learning. First, for strongly convex and smooth loss functions, we show that DONE can approximate the Newton direction in a distributed manner by running the classical Richardson iteration on each edge worker. Second, we prove that DONE has linear-quadratic convergence and analyze its computation and communication complexities. Finally, experimental results with non-i.i.d. and heterogeneous data show that DONE matches the performance of Newton's method. Notably, DONE requires considerably fewer communication iterations than distributed gradient descent and outperforms DANE, a state-of-the-art method, in the case of non-quadratic loss functions.
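To make the key idea concrete: the Newton direction is the solution d of Hd = g, where H is the Hessian and g the gradient, and the classical Richardson iteration solves this linear system using only Hessian-vector products, never forming or inverting H. The sketch below is a minimal illustration of that iteration under the abstract's strong-convexity and smoothness assumptions, not the paper's actual implementation; the names `richardson_newton_direction`, `hess_vec`, and `alpha` are hypothetical, and the step-size condition 0 < alpha < 2/lambda_max(H) is the standard convergence requirement for Richardson iteration on a symmetric positive-definite H.

```python
import numpy as np

def richardson_newton_direction(hess_vec, grad, alpha, num_iters=100):
    """Approximate the Newton direction d = H^{-1} g via Richardson iteration.

    hess_vec  -- callable returning the Hessian-vector product H @ v
    grad      -- local gradient g
    alpha     -- step size; for SPD H, needs 0 < alpha < 2 / lambda_max(H)
    num_iters -- number of fixed-point updates
    """
    d = np.zeros_like(grad)
    for _ in range(num_iters):
        # Fixed point of d <- d + alpha * (g - H d) satisfies H d = g.
        d = d + alpha * (grad - hess_vec(d))
    return d

# Hypothetical usage on a small strongly convex quadratic:
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5.0 * np.eye(5)          # SPD Hessian (strong convexity)
g = rng.standard_normal(5)
alpha = 1.0 / np.linalg.eigvalsh(H).max()
d = richardson_newton_direction(lambda v: H @ v, g, alpha, num_iters=500)
print(np.allclose(d, np.linalg.solve(H, g)))  # True: matches exact Newton direction
```

Because each update needs only a Hessian-vector product on the worker's local data, this style of solver fits the federated setting: workers refine the direction locally and exchange only low-dimensional vectors, which is consistent with the communication savings the abstract reports.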
