Adversarial Examples in Deep Learning: Characterization and Divergence

06/29/2018
by Wenqi Wei, et al.

The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic and principled approach to the statistical characterization of adversarial examples in deep learning. We provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. We introduce an easy/hard categorization of adversarial attacks and analyze the effectiveness of adversarial examples in terms of attack success rate, degree of change in the adversarial perturbation, average entropy of prediction qualities, and fraction of adversarial examples that lead to successful attacks. We conduct an extensive experimental study of adversarial behavior in easy and hard attacks under deep learning models trained with different hyperparameters and under different deep learning frameworks. We show that the same adversarial attack behaves differently under different hyperparameters and across different frameworks because each training process learns different features. Our statistical characterization, backed by strong empirical evidence, offers transformative insight into mitigation strategies and effective countermeasures against present and future adversarial attacks.
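To make the characterization concrete, here is a minimal, hypothetical PyTorch sketch, not the paper's own implementation: it generates adversarial examples with one canonical attack, the fast gradient sign method (FGSM) of Goodfellow et al., and computes three of the quantities named above (attack success rate, degree of change in the perturbation, and average prediction entropy). The model, data, and epsilon are placeholders, and inputs are assumed to lie in [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon):
        # Fast Gradient Sign Method: x_adv = x + epsilon * sign(grad_x loss)
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid input range

    @torch.no_grad()
    def characterize(model, x, x_adv, y):
        probs = F.softmax(model(x_adv), dim=1)
        # Attack success rate: fraction of adversarial examples misclassified.
        success_rate = (probs.argmax(dim=1) != y).float().mean().item()
        # Degree of change: mean L2 norm of the adversarial perturbation.
        degree_of_change = (x_adv - x).flatten(1).norm(dim=1).mean().item()
        # Average entropy of the prediction distribution on adversarial inputs.
        avg_entropy = (-probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean().item()
        return success_rate, degree_of_change, avg_entropy

    # Hypothetical usage with any classifier and a labeled batch (x, y):
    # x_adv = fgsm_attack(model, x, y, epsilon=0.03)
    # asr, delta, h = characterize(model, x, x_adv, y)

Running the same sketch against models trained with different hyperparameters or frameworks is one way to observe the divergence in attack behavior that the abstract describes.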


research 10/06/2021
Reversible adversarial examples against local visual perturbation
Recently, studies have indicated that adversarial attacks pose a threat ...

research 09/28/2018
Adversarial Attacks and Defences: A Survey
Deep learning has emerged as a strong and efficient framework that can b...

research 06/19/2020
Adversarial Attacks for Multi-view Deep Models
Recent work has highlighted the vulnerability of many deep machine learn...

research 07/16/2023
On the Robustness of Split Learning against Adversarial Attacks
Split learning enables collaborative deep learning model training while ...

research 06/04/2020
Characterizing the Weight Space for Different Learning Models
Deep Learning has become one of the primary research areas in developing...

research 02/14/2019
Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?
Convolutional Neural Networks and Deep Learning classification systems i...

research 04/15/2018
Adversarial Attacks Against Medical Deep Learning Systems
The discovery of adversarial examples has raised concerns about the prac...
