The Adversarial Attack and Detection under the Fisher Information Metric

10/09/2018
by Chenxiao Zhao, et al.

Many deep learning models are vulnerable to adversarial attacks: imperceptible but intentionally designed perturbations of the input can cause a network to produce incorrect outputs. In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models. By treating the data space as a non-linear space equipped with the Fisher information metric induced from a neural network, we first propose an adversarial attack algorithm termed the one-step spectral attack (OSSA). The method formulates the attack as a constrained quadratic form of the Fisher information matrix: the optimal adversarial perturbation is given by the first eigenvector, and the model's vulnerability is reflected in the eigenvalues. The larger an eigenvalue, the more vulnerable the model is to perturbations along the corresponding eigenvector. Taking advantage of this property, we also propose an adversarial detection method that uses the eigenvalues as features. Both our attack and detection algorithms are numerically optimized to work efficiently on large datasets. Our evaluations show superior performance compared with other methods, suggesting that the Fisher information is a promising approach for investigating adversarial attacks and defenses.
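The attack is straightforward to prototype. Below is a minimal PyTorch sketch, not the authors' released implementation: all names, the power-iteration routine, and hyperparameters such as eps and iters are illustrative assumptions. The input-space Fisher information matrix G_x = E_{y~p(y|x)}[∇_x log p(y|x) ∇_x log p(y|x)^T] is accessed only through matrix-vector products, and power iteration recovers its first eigenvector, which serves as the perturbation direction.

```python
import torch

def fisher_vector_product(model, x, v):
    """Matrix-vector product G_x v with the input-space Fisher matrix
    G_x = E_{y ~ p(y|x)}[g_y g_y^T], where g_y = grad_x log p(y|x).
    G_x is never formed explicitly; we loop over classes, which is
    practical when the number of labels is small (e.g. CIFAR-10)."""
    x = x.clone().detach().requires_grad_(True)
    log_p = torch.log_softmax(model(x), dim=-1).view(-1)  # (num_classes,)
    p = log_p.detach().exp()
    gv = torch.zeros_like(x)
    for y in range(log_p.numel()):
        g = torch.autograd.grad(log_p[y], x, retain_graph=True)[0]
        gv += p[y] * (g * v).sum() * g  # p(y|x) * (g_y^T v) * g_y
    return gv.detach()

def one_step_spectral_attack(model, x, eps=0.1, iters=20):
    """Power iteration for the first eigenvector of G_x; the
    perturbation is taken along that direction with norm eps."""
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(iters):
        gv = fisher_vector_product(model, x, v)
        v = gv / (gv.norm() + 1e-12)
    eigval = (v * fisher_vector_product(model, x, v)).sum()  # Rayleigh quotient
    # Eigenvectors are sign-ambiguous; in practice one would pick the
    # sign that increases the classification loss at x.
    return (x + eps * v).detach(), eigval.item()
```

A typical usage would be x_adv, lam = one_step_spectral_attack(net, x), then checking whether the argmax prediction at x_adv differs from that at x. The class loop costs one backward pass per label; the paper's claim of efficiency on large datasets suggests its actual implementation uses further numerical optimizations beyond this sketch.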

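The detection side can be sketched the same way. Reusing the fisher_vector_product helper above, the snippet below estimates the k largest eigenvalues of G_x by power iteration with deflation. The abstract only states that the eigenvalues serve as characteristics, so feeding them to a simple binary classifier trained on natural versus adversarial inputs is one plausible instantiation, not necessarily the authors' exact detector.

```python
import torch

def top_fisher_eigenvalues(model, x, k=5, iters=20):
    """Estimate the k largest eigenvalues of the input-space Fisher
    matrix via power iteration with deflation (each new direction is
    orthogonalized against those already found). The resulting vector
    of eigenvalues is the feature used for adversarial detection."""
    eigvals, eigvecs = [], []
    for _ in range(k):
        v = torch.randn_like(x)
        v = v / v.norm()
        for _ in range(iters):
            gv = fisher_vector_product(model, x, v)
            for w in eigvecs:                 # deflate known eigenvectors
                gv = gv - (w * gv).sum() * w
            v = gv / (gv.norm() + 1e-12)
        lam = (v * fisher_vector_product(model, x, v)).sum()
        eigvals.append(lam.item())
        eigvecs.append(v)
    return eigvals  # e.g. features for a logistic-regression detector
```

Large leading eigenvalues indicate directions along which the model's predictive distribution is highly sensitive, which is exactly what the attack exploits and what the detector measures.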
Related research

10/13/2020 · Towards Understanding Pixel Vulnerability under Adversarial Attacks for Images
Deep neural network image classifiers are reported to be susceptible to ...

11/07/2022 · Deviations in Representations Induced by Adversarial Attacks
Deep learning has been a popular topic and has achieved success in many ...

05/18/2020 · Universalization of any adversarial attack using very few test examples
Deep learning models are known to be vulnerable not only to input-depend...

09/13/2019 · Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix
We propose a scheme for defending against adversarial attacks by suppres...

03/02/2022 · Canonical foliations of neural networks: application to robustness
Adversarial attack is an emerging threat to the trustability of machine ...

10/05/2020 · Adversarial Boot Camp: label free certified robustness in one epoch
Machine learning models are vulnerable to adversarial attacks. One appro...

02/18/2020 · Block Switching: A Stochastic Approach for Deep Learning Security
Recent study of adversarial attacks has revealed the vulnerability of mo...
