Improved Convergence Analysis and SNR Control Strategies for Federated Learning in the Presence of Noise

07/14/2023
by   Antesh Upadhyay, et al.

We propose an improved convergence analysis technique that characterizes the distributed learning paradigm of federated learning (FL) with imperfect/noisy uplink and downlink communications. Such imperfect communication scenarios arise in the practical deployment of FL in emerging communication systems and protocols. The analysis developed in this paper demonstrates, for the first time, that the detrimental effects of uplink and downlink noise on FL are asymmetric: the adverse effect of downlink noise on the convergence of FL algorithms is more severe. Using this insight, we propose improved Signal-to-Noise Ratio (SNR) control strategies that, after discarding negligible higher-order terms, yield a convergence rate for FL similar to that of a perfect, noise-free communication channel, while consuming significantly fewer power resources than existing solutions. In particular, we establish that to maintain the O(1/√K) rate of convergence of noise-free FL, the uplink and downlink noise must be scaled down by Ω(√k) and Ω(k), respectively, where k denotes the communication round, k = 1, …, K. Our theoretical result offers two major benefits: first, it does not rely on the somewhat unrealistic assumption of bounded client dissimilarity, and second, it only requires smooth non-convex loss functions, a function class better suited to modern machine learning and deep learning models. We also perform extensive empirical analysis to verify the validity of our theoretical findings.
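To make the round-dependent SNR schedule concrete, the following is a minimal simulation sketch of noisy federated averaging in which the downlink noise is attenuated by a factor of k and the uplink noise by a factor of √k at round k, as the abstract prescribes. The toy quadratic client objectives, the base noise level `base_sigma`, and all variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_clients, K, lr = 10, 5, 200, 0.1
base_sigma = 0.5  # raw channel-noise std before SNR control (assumed)

# Each client i holds a toy quadratic loss f_i(w) = 0.5 * ||w - c_i||^2,
# so the global minimizer is the mean of the centers c_i.
centers = rng.normal(size=(num_clients, d))
w_server = np.zeros(d)

for k in range(1, K + 1):
    # Downlink: broadcast the global model; noise scaled down by k.
    downlink_noise = rng.normal(size=(num_clients, d)) * base_sigma / k
    local_models = w_server + downlink_noise

    # Local update: one gradient step on each client's quadratic loss.
    grads = local_models - centers
    local_models -= lr * grads

    # Uplink: clients report back; noise scaled down by sqrt(k).
    uplink_noise = rng.normal(size=(num_clients, d)) * base_sigma / np.sqrt(k)
    w_server = np.mean(local_models + uplink_noise, axis=0)

dist = np.linalg.norm(w_server - centers.mean(axis=0))
print(f"after {K} rounds: distance to the global minimizer = {dist:.4f}")
```

Under this schedule the injected noise shrinks fast enough that, up to higher-order terms, the iterates behave like noise-free FedAvg; with a constant noise level instead (removing the 1/k and 1/√k factors), the server iterate stalls in a noise floor rather than converging.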

