LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

11/27/2018
by Hsin-Pai Cheng, et al.

Distributed learning systems have enabled training large-scale models over large amounts of data in significantly shorter time. In this paper, we focus on decentralized distributed deep-learning systems and aim to achieve differential privacy with a good convergence rate and low communication cost. To this end, we propose a new learning algorithm, LEASGD (Leader-Follower Elastic Averaging Stochastic Gradient Descent), which is driven by a novel Leader-Follower topology and a differential privacy model. We provide a theoretical analysis of the convergence rate and of the trade-off between performance and privacy in the private setting. The experimental results show that LEASGD outperforms the state-of-the-art decentralized learning algorithm DPSGD by achieving steadily lower loss within the same number of iterations and by reducing the communication cost by 30%. Moreover, LEASGD spends less differential-privacy budget and attains higher final accuracy than DPSGD in the private setting.
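To make the idea concrete, below is a minimal illustrative sketch of a single follower update in the leader-follower elastic-averaging scheme the abstract describes: a gradient step combined with an elastic pull toward the leader's parameters, with Gaussian noise added for differential privacy. The function name, hyperparameter values, noise calibration, and the leader-selection rule here are all assumptions for illustration; the actual update rules and privacy accounting are specified in the full paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def follower_step(w_follower, w_leader, grad, lr=0.05, rho=0.1, noise_std=0.01):
    """One hypothetical follower update: a gradient step plus an elastic
    pull toward the leader, with Gaussian noise added for differential
    privacy. All names and constants are illustrative, not the paper's."""
    elastic_pull = rho * (w_follower - w_leader)              # pull toward leader
    dp_noise = rng.normal(0.0, noise_std, w_follower.shape)  # assumed DP noise
    return w_follower - lr * (grad + elastic_pull) + dp_noise

# Toy demo on a quadratic loss f(w) = ||w||^2 / 2, whose gradient is w.
w_leader = np.zeros(3)
followers = [rng.normal(size=3) for _ in range(2)]

for _ in range(100):
    followers = [follower_step(w, w_leader, grad=w) for w in followers]
    # Assumed leader rule: the leader tracks the best-performing
    # (lowest-loss) follower, in the spirit of the Leader-Follower topology.
    w_leader = min(followers, key=lambda w: 0.5 * w @ w)

print(followers)  # both followers converge near the minimizer at the origin
```

In this toy run, both followers are drawn toward the leader (and hence toward the loss minimizer) up to the injected noise floor, which mirrors the privacy-utility trade-off the paper analyzes.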
