Searching for the Essence of Adversarial Perturbations

05/30/2022
by Dennis Y. Menn, et al.

Neural networks have achieved state-of-the-art performance across a range of machine learning tasks, yet adding malicious perturbations to input data (adversarial examples) can fool their predictions. This poses real-world risks, for example compromising autonomous driving or corrupting text recognition. Mitigating these risks requires an understanding of how adversarial examples operate, which remains unresolved. Here we demonstrate that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction, in contrast to the widely discussed argument that human-imperceptible information plays the critical role in fooling a network. This concept of human-recognizable information allows us to explain key features of adversarial perturbations, including the existence of adversarial examples, their transferability among different neural networks, and the increased interpretability of adversarially trained networks. Two unique properties of adversarial perturbations that fool neural networks are uncovered: masking and generation. A special class, the complementary class, is identified when neural networks classify input images. The human-recognizable information contained in adversarial perturbations allows researchers to gain insight into the working principles of neural networks and may lead to techniques that detect and defend against adversarial attacks.
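To make the central term concrete, below is a minimal sketch (not from the paper) of how an adversarial perturbation can be generated with the Fast Gradient Sign Method; it assumes a hypothetical PyTorch classifier named model and a correctly labeled input batch (x, y), and epsilon bounds the perturbation's L-infinity norm.

import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon=0.03):
    # Compute the loss gradient with respect to the input itself.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss; epsilon keeps the
    # perturbation small in L-infinity norm so it stays visually subtle.
    return (epsilon * x.grad.sign()).detach()

# Usage (hypothetical): x_adv = x + fgsm_perturbation(model, x, y)
# Comparing model(x) with model(x_adv) shows how a small, targeted
# perturbation can flip the network's prediction.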

