Performance Analysis and Optimization in Privacy-Preserving Federated Learning

02/29/2020
by Kang Wei, et al.

As a means of decentralized machine learning, federated learning (FL) has recently drawn considerable attention. One of its prominent advantages is that clients' data are never directly exposed to external adversaries. Nevertheless, from an information-theoretic viewpoint, it is still possible for an attacker to steal private information by eavesdropping on the shared models uploaded by FL clients. To address this problem, we develop a novel privacy-preserving FL framework based on the concept of differential privacy (DP). Specifically, we first borrow the concept of local DP and introduce client-level DP (CDP), in which artificial noise is added to the shared models before they are uploaded to servers. We then prove that the proposed CDP algorithm satisfies the DP guarantee with adjustable privacy protection levels, obtained by varying the variance of the artificial noise. More importantly, we derive a theoretical upper bound on the convergence of the CDP algorithm. This upper bound reveals that, for a given privacy protection level, there exists an optimal number of communication rounds that achieves the best convergence performance in terms of the loss function value. Furthermore, since this optimal number of communication rounds cannot be expressed in closed form, we propose a communication rounds discounting (CRD) method to obtain it. Compared with heuristic search, the proposed CRD achieves a much better trade-off between the computational complexity of the search and the convergence performance. Extensive experiments indicate that the CDP algorithm, combined with CRD-based optimization of the number of communication rounds, effectively improves both FL training efficiency and FL model quality for a given privacy protection level.
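To make the noise-addition step concrete, the following is a minimal sketch of the common clip-then-perturb pattern used in DP mechanisms of this kind: each client clips its model update to a bounded L2 norm (bounding the sensitivity) and then adds Gaussian noise before upload. The function name `clip_and_noise` and the parameters `clip_norm` and `sigma` are illustrative assumptions, not the paper's exact formulation; the actual noise variance in the CDP algorithm is calibrated to the desired privacy protection level.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, sigma=0.5, rng=None):
    """Hypothetical sketch: clip a client's model update to an L2-norm bound,
    then add Gaussian noise scaled by that bound (the Gaussian mechanism).

    update    -- flat numpy array of model parameters (or parameter deltas)
    clip_norm -- L2 sensitivity bound applied before noising
    sigma     -- noise multiplier; larger sigma gives stronger privacy
    """
    rng = np.random.default_rng() if rng is None else rng
    # Scale the update down if its norm exceeds clip_norm, else leave it as-is.
    norm = np.linalg.norm(update)
    clipped = update / max(1.0, norm / clip_norm)
    # Add zero-mean Gaussian noise with standard deviation sigma * clip_norm.
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

# Example: an update of norm 5 is clipped to norm 1 before noise is added.
noisy = clip_and_noise(np.array([3.0, 4.0]), clip_norm=1.0, sigma=0.1)
```

In a full FL round, each client would apply such a perturbation locally before transmission, so the server (and any eavesdropper) only ever observes the noised models.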
