Performance Analysis and Optimization in Privacy-Preserving Federated Learning

02/29/2020
by   Kang Wei, et al.

As a means of decentralized machine learning, federated learning (FL) has recently drawn considerable attention. One of the prominent advantages of FL is its capability of preventing clients' data from being directly exposed to external adversaries. Nevertheless, from an information-theoretic viewpoint, it is still possible for an attacker to steal private information by eavesdropping on the shared models uploaded by FL clients. To address this problem, we develop a novel privacy-preserving FL framework based on the concept of differential privacy (DP). Specifically, we first borrow the concept of local DP and introduce a client-level DP (CDP) by adding artificial noise to the shared models before uploading them to servers. Then, we prove that our proposed CDP algorithm can satisfy the DP guarantee with adjustable privacy protection levels by varying the variances of the artificial noise. More importantly, we derive a theoretical convergence upper bound for the CDP algorithm. This upper bound reveals that there exists an optimal number of communication rounds that achieves the best convergence performance, in terms of loss function value, for a given privacy protection level. Furthermore, since this optimal number of communication rounds cannot be derived in closed form, we propose a communication rounds discounting (CRD) method to obtain it. Compared with a heuristic search, the proposed CRD achieves a much better trade-off between the computational complexity of the search and the resulting convergence performance. Extensive experiments indicate that our CDP algorithm, combined with CRD-based optimization of the number of communication rounds, can effectively improve both FL training efficiency and FL model quality for a given privacy protection level.
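The client-level DP step described above, adding artificial Gaussian noise to the shared model before upload, can be sketched as follows. This is a generic illustration of the standard clip-then-perturb Gaussian mechanism, not the authors' actual implementation; the function name and parameters (`clip_norm`, `noise_std`) are hypothetical.

```python
import numpy as np

def clip_and_perturb(update, clip_norm, noise_std, rng=None):
    """Bound the L2 norm of a client's model update, then add Gaussian noise.

    Illustrative sketch of a client-level DP upload step: clipping bounds
    the sensitivity of each client's contribution, and the noise standard
    deviation controls the privacy protection level.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    # Scale the update down only if its norm exceeds clip_norm.
    clipped = update / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_std, size=update.shape)
    return clipped + noise
```

A larger `noise_std` yields a stronger (smaller-epsilon) DP guarantee at the cost of convergence, which is the trade-off the derived upper bound quantifies.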

Related Research

11/01/2019
Federated Learning with Differential Privacy: Algorithms and Performance Analysis
In this paper, to effectively prevent information leakage, we propose a ...

11/01/2019
Performance Analysis on Federated Learning with Differential Privacy
In this paper, to effectively prevent the differential attack, we propos...

03/07/2023
Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning
While preserving the privacy of federated learning (FL), differential pr...

06/21/2022
sqSGD: Locally Private and Communication Efficient Federated Learning
Federated learning (FL) is a technique that trains machine learning mode...

01/11/2021
On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times
In spite that Federated Learning (FL) is well known for its privacy prot...

12/02/2020
Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients
Federated learning (FL), as a distributed machine learning approach, has...

09/23/2022
Privacy-Preserving Online Content Moderation: A Federated Learning Use Case
Users are daily exposed to a large volume of harmful content on various ...
