Tailoring Gradient Methods for Differentially-Private Distributed Optimization

02/02/2022
by Yongqiang Wang, et al.

Decentralized optimization is gaining increased traction due to its widespread applications in large-scale machine learning and multi-agent systems. However, the very mechanism that enables its success, information sharing among participating agents, also leads to the disclosure of individual agents' private information, which is unacceptable when sensitive data are involved. As differential privacy becomes a de facto standard for privacy preservation, results have recently emerged that integrate differential privacy with distributed optimization. Although such differential-privacy based approaches are efficient in both computation and communication, directly incorporating differential-privacy design into existing distributed optimization approaches significantly compromises optimization accuracy. In this paper, we redesign and tailor gradient methods for differentially-private distributed optimization, and propose two differential-privacy oriented gradient methods that ensure both privacy and optimality. We prove that the proposed distributed algorithms converge almost surely to an optimal solution under any persistent and variance-bounded differential-privacy noise, which, to the best of our knowledge, has not been reported before. The first algorithm is based on static-consensus gradient methods and shares only one variable in each iteration. The second algorithm is based on dynamic-consensus (gradient-tracking) distributed optimization methods and is therefore applicable to general directed interaction graph topologies. Numerical comparisons with existing counterparts confirm the effectiveness of the proposed approaches.
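To make the setting concrete, the sketch below illustrates the general shape of a consensus-based, differentially-private distributed gradient iteration: each agent perturbs its state before sharing it, mixes the noisy states received from neighbors, and then takes a local gradient step. This is only a minimal sketch of the generic template, not the paper's redesigned algorithms; the mixing matrix `W`, the noise distribution, and the step-size schedule are illustrative assumptions, and the specific weight/step-size designs that give the almost-sure convergence result are not reproduced here.

```python
# Minimal sketch of a generic consensus-based, differentially-private
# distributed gradient step (illustrative; not the paper's tailored design).
import numpy as np

rng = np.random.default_rng(0)

def dp_distributed_gradient_step(x, W, grads, step_size, noise_std):
    """One synchronous iteration for all n agents.

    x         : (n, d) array, row i is agent i's current estimate
    W         : (n, n) stochastic mixing matrix (hypothetical choice)
    grads     : list of n callables, grads[i](x_i) -> local gradient of agent i
    step_size : gradient step size for this iteration
    noise_std : std of the perturbation added before sharing (Gaussian here)
    """
    n, d = x.shape
    # Each agent perturbs its own state before sharing it with neighbors;
    # this injected noise is what underlies the differential-privacy guarantee.
    shared = x + rng.normal(scale=noise_std, size=(n, d))
    # Consensus (mixing) step performed on the shared, noisy states.
    mixed = W @ shared
    # Local gradient correction using each agent's private objective/data.
    g = np.stack([grads[i](x[i]) for i in range(n)])
    return mixed - step_size * g
```

In this style of algorithm, the caller typically supplies a diminishing step-size (and, in redesigned methods such as the paper's, suitably tailored mixing/attenuation weights) to reconcile persistent privacy noise with exact convergence; the sketch above deliberately leaves those schedules to the caller.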
