Coding for Straggler Mitigation in Federated Learning

09/30/2021
by Siddhartha Kumar, et al.

We present a novel coded federated learning (FL) scheme for linear regression that mitigates the effect of straggling devices while retaining the privacy level of conventional FL. The proposed scheme combines one-time padding, which preserves privacy, with gradient codes, which provide resiliency against stragglers, and consists of two phases. In the first phase, the devices share a one-time padded version of their local data with a subset of other devices. In the second phase, the devices and the central server collaboratively and iteratively train a global linear model using gradient codes on the one-time padded local data. To apply one-time padding to real-valued data, our scheme exploits a fixed-point arithmetic representation of the data. Unlike the coded FL scheme recently introduced by Prakash et al., the proposed scheme maintains the same level of privacy as conventional FL while achieving a similar training time. Compared to conventional FL, we show that the proposed scheme achieves training speed-up factors of 6.6 and 9.2 on the MNIST and Fashion-MNIST datasets for accuracies of 95% and 85%, respectively.
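The abstract mentions that one-time padding is applied to real-valued data via a fixed-point representation. The sketch below illustrates that general idea only; it is not the authors' implementation, and the word width, number of fractional bits, and helper names (to_fixed_point, one_time_pad, etc.) are illustrative assumptions.

```python
# Minimal sketch of one-time padding over a fixed-point representation.
# WORD_BITS, FRAC_BITS, and the modulus are assumed parameters, not from the paper.
import numpy as np

WORD_BITS = 32          # assumed width of the fixed-point words
FRAC_BITS = 16          # assumed number of fractional bits
MODULUS = 1 << WORD_BITS

def to_fixed_point(x):
    """Quantize real-valued data to integers modulo 2**WORD_BITS."""
    return np.round(np.asarray(x) * (1 << FRAC_BITS)).astype(np.int64) % MODULUS

def from_fixed_point(x):
    """Map fixed-point words back to reals (values >= 2**(WORD_BITS-1) are negative)."""
    signed = np.where(x >= MODULUS // 2, x - MODULUS, x)
    return signed / float(1 << FRAC_BITS)

def one_time_pad(data, rng):
    """Pad fixed-point data with a uniformly random one-time key (modular addition)."""
    key = rng.integers(0, MODULUS, size=data.shape, dtype=np.int64)
    padded = (data + key) % MODULUS
    return padded, key

def remove_pad(padded, key):
    """Subtract the one-time key to recover the original fixed-point data."""
    return (padded - key) % MODULUS

# Example: a device pads its local data before sharing it with other devices.
rng = np.random.default_rng(0)
local_data = np.array([[0.5, -1.25], [2.0, 3.75]])
fp = to_fixed_point(local_data)
padded, key = one_time_pad(fp, rng)
assert np.allclose(from_fixed_point(remove_pad(padded, key)), local_data)
```

Because the pad is drawn uniformly from the same modular ring as the data, the padded values reveal nothing about the local data to a device that does not hold the key, which is the privacy property the scheme relies on when data is shared in the first phase.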
