On the Connection between Differential Privacy and Adversarial Robustness in Machine Learning

02/09/2018
by   Mathias Lecuyer, et al.

Adversarial examples in machine learning have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort, heuristic approaches that have all been shown to be vulnerable to sophisticated attacks. More recently, rigorous defenses that provide formal guarantees have emerged, but they are hard to scale or generalize. A rigorous and general foundation for designing defenses is required to get us off this arms-race trajectory. We propose leveraging differential privacy (DP) as a formal building block for robustness against adversarial examples. We observe that the semantics of DP is closely aligned with the formal definition of robustness to adversarial examples. We propose PixelDP, a strategy for learning robust deep neural networks based on formal DP guarantees. PixelDP networks give theoretical guarantees of robustness to adversarial perturbations of bounded size for a subset of their predictions. Our evaluation with MNIST, CIFAR-10, and CIFAR-100 shows that PixelDP networks achieve accuracy under attack on par with the best-performing defense to date, but additionally certify robustness against meaningful-size 1-norm and 2-norm attacks for 40-60% of their predictions. Our experience points to DP as a rigorous, broadly applicable, and mechanism-rich foundation for robust machine learning.
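To make the DP-robustness connection concrete, here is a minimal sketch of a PixelDP-style certified prediction, assuming a Gaussian noise mechanism applied directly to the input. The function name `certified_predict`, the noise scale `sigma`, the draw count, and the toy model are illustrative assumptions, not the authors' released code; the certification test is the paper's expected-score condition, but in practice `sigma` must be calibrated to the DP parameters and the attack size.

```python
import numpy as np

# Hypothetical sketch of PixelDP-style certified prediction (illustrative
# names; not the paper's released implementation). `model` is assumed to map
# a batch of inputs, shape (n, *x.shape), to softmax scores, shape (n, k).
def certified_predict(model, x, sigma=0.5, n_draws=300, eps=1.0, delta=1e-5):
    # DP mechanism: add Gaussian noise to the input ("pixels"). The sigma
    # here is an arbitrary placeholder; a real deployment calibrates it to
    # (eps, delta) and the perturbation size being certified against.
    noise = np.random.normal(0.0, sigma, size=(n_draws, *x.shape))
    scores = model(x[None, ...] + noise).mean(axis=0)  # estimate of E[A(x)]

    label = int(np.argmax(scores))
    runner_up, top = np.sort(scores)[-2:]
    # PixelDP robustness condition on expected scores: the prediction is
    # certified if E[A(x)]_k > e^(2*eps) * max_{i!=k} E[A(x)]_i + (1 + e^eps) * delta.
    certified = top > np.exp(2 * eps) * runner_up + (1 + np.exp(eps)) * delta
    return label, bool(certified)

# Toy usage with a random linear softmax "model" (illustrative only).
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))
def toy_model(batch):
    logits = batch.reshape(len(batch), -1) @ W.T
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

print(certified_predict(toy_model, rng.normal(size=(784,))))
```

A faithful implementation, as described in the paper, would instead place the noise layer inside the network, train with the noise present, and wrap the Monte Carlo estimates of the expected scores in confidence intervals before certifying.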

Related research

03/23/2019 · Preserving Differential Privacy in Adversarial Learning with Provable Robustness
In this paper, we aim to develop a novel mechanism to preserve different...

05/17/2021 · Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning
An important problem in deep learning is the privacy and security of neu...

06/19/2019 · A unified view on differential privacy and robustness to adversarial examples
This short note highlights some links between two lines of research with...

06/14/2023 · Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
Machine learning models are susceptible to a variety of attacks that can...

01/29/2018 · Certified Defenses against Adversarial Examples
While neural networks have achieved high accuracy on standard image clas...

01/24/2021 · A Comprehensive Evaluation Framework for Deep Model Robustness
Deep neural networks (DNNs) have achieved remarkable performance across ...

06/23/2019 · Defending Against Adversarial Examples with K-Nearest Neighbor
Robustness is an increasingly important property of machine learning mod...
