Extending Defensive Distillation

05/15/2017
by Nicolas Papernot et al.

Machine learning is vulnerable to adversarial examples: inputs carefully modified to force misclassification. Designing defenses against such inputs remains largely an open problem. In this work, we revisit defensive distillation, which is one of the mechanisms proposed to mitigate adversarial examples, to address its limitations. We view our results not only as an effective way of addressing some of the recently discovered attacks but also as reinforcing the importance of improved training techniques.
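For context, defensive distillation (Papernot et al., 2016) trains a second "distilled" model on the temperature-softened class probabilities produced by an initially trained model, rather than on hard labels, and deploys the distilled model at temperature 1. The sketch below illustrates that basic recipe in PyTorch; the architecture, temperature value, and helper names are illustrative assumptions, not the exact setup used in this paper.

```python
# Minimal sketch of defensive distillation (assumed setup, not the exact
# configuration from this paper): train a teacher at high softmax
# temperature T, train a same-architecture student on the teacher's soft
# labels at the same T, then deploy the student at T = 1.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 20.0  # distillation temperature (illustrative choice)

def make_model():
    # Illustrative small classifier for 28x28 grayscale inputs (e.g. MNIST).
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                         nn.Linear(256, 10))

def train(model, loader, target_fn, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            logits = model(x)
            # Cross-entropy against (possibly soft) targets, computed on the
            # temperature-scaled softmax.
            targets = target_fn(x, y)
            loss = -(targets * F.log_softmax(logits / T, dim=1)).sum(1).mean()
            loss.backward()
            opt.step()
    return model

# 1) Train the teacher on hard (one-hot) labels with temperature-scaled softmax.
teacher = make_model()
hard_targets = lambda x, y: F.one_hot(y, 10).float()
# train(teacher, train_loader, hard_targets)   # train_loader: your DataLoader

# 2) Train the distilled student on the teacher's soft labels at the same T.
student = make_model()
soft_targets = lambda x, y: F.softmax(teacher(x).detach() / T, dim=1)
# train(student, train_loader, soft_targets)

# 3) At test time, use the student with the standard softmax (T = 1).
```

Training at a high temperature smooths the distilled model's output surface, which was the original rationale for the defense; the work summarized above revisits this recipe to address the attacks that were later shown to bypass it.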

Related research

07/14/2016
Defensive Distillation is Not Robust to Adversarial Examples
We show that defensive distillation is not secure: it is no more resista...

05/20/2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Neural networks are known to be vulnerable to adversarial examples: inpu...

08/21/2019
Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples
Adversarial examples are artificially modified input samples which lead ...

06/19/2022
On the Limitations of Stochastic Pre-processing Defenses
Defending against adversarial examples remains an open problem. A common...

09/29/2017
Ground-Truth Adversarial Examples
The ability to deploy neural networks in real-world, safety-critical sys...

08/12/2020
Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise
Sensitivity to adversarial noise hinders deployment of machine learning ...

02/06/2022
Pipe Overflow: Smashing Voice Authentication for Fun and Profit
Recent years have seen a surge of popularity of acoustics-enabled person...
