Lossy Gradient Compression: How Much Accuracy Can One Bit Buy?

02/06/2022
by Sadaf Salehkalaibar, et al.

In federated learning (FL), a global model is trained at a Parameter Server (PS) by aggregating model updates obtained from multiple remote learners. Critically, the communication from the remote learners to the PS is limited by the available transmission power, while the transmission from the PS to the remote learners can be considered unbounded. This gives rise to a distributed learning scenario in which the updates from the remote learners must be compressed so as to meet communication rate constraints on the uplink toward the PS. For this problem, one would like to compress the model updates so as to minimize the resulting loss in accuracy. In this paper, we take a rate-distortion approach to answer this question for the distributed training of a deep neural network (DNN). In particular, we define a measure of compression performance, the per-bit accuracy, which captures the ultimate model accuracy that each bit of communication brings to the centralized model. To maximize the per-bit accuracy, we model the gradient updates at the remote learners as following a generalized normal distribution. Under this assumption on the model update distribution, we propose a class of distortion measures for the design of quantizers for the compression of the model updates. We argue that this family of distortion measures, which we refer to as the "M-magnitude weighted L_2" norm, captures practitioners' intuition in the choice of gradient compressors. Numerical simulations are provided to validate the proposed approach.
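To make the distortion family concrete, the sketch below shows one plausible instantiation of an "M-magnitude weighted L_2" measure: each squared quantization error is weighted by the magnitude of the corresponding gradient entry raised to the power M, so that errors on large-magnitude entries are penalized more heavily. The function name, the exact weighting |g_i|^M, and the default M are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def m_magnitude_weighted_l2(g, g_hat, M=1.0):
    """Illustrative "M-magnitude weighted L_2" distortion (assumed form):
    d(g, g_hat) = sum_i |g_i|^M * (g_i - g_hat_i)^2.
    With M = 0 this reduces to the ordinary squared L_2 error; larger M
    concentrates the penalty on large-magnitude gradient entries, which
    matches the intuition that quantizers should preserve large updates."""
    g = np.asarray(g, dtype=float)
    g_hat = np.asarray(g_hat, dtype=float)
    return float(np.sum(np.abs(g) ** M * (g - g_hat) ** 2))

# Example: a large entry (1.0) and a small entry (0.1), each quantized
# with the same absolute error of 0.1.
g = [1.0, 0.1]
g_hat = [0.9, 0.0]
d_unweighted = m_magnitude_weighted_l2(g, g_hat, M=0.0)  # plain L_2^2
d_weighted = m_magnitude_weighted_l2(g, g_hat, M=1.0)    # magnitude-weighted
```

For M = 0 both errors count equally; for M = 1 the error on the large entry dominates the distortion, illustrating how the weighting steers a quantizer's bit budget toward the entries that matter most. Gradient-like inputs with heavy tails, as assumed in the paper, can be simulated with a generalized normal distribution (e.g. `scipy.stats.gennorm` with shape parameter below 2).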
