Adaptive Gradient Refinement for Adversarial Perturbation Generation

02/01/2019
by Yatie Xiao, et al.

Deep Neural Networks have achieved remarkable success in computer vision, natural language processing, and audio tasks. However, in classification domains, research has shown that deep neural models are easily fooled into making different or wrong predictions, which may cause severe consequences. Many attack methods generate adversarial perturbations with large-scale pixel modification and low cosine similarity between the original and the corresponding adversarial examples. To address these issues, we propose an adversarial method that adaptively adjusts the perturbation strength and updates the gradient direction when generating attacks: it produces perturbation tensors by adjusting their strength adaptively and updates the gradient direction so that it can escape local minima or maxima by combining the current gradient with previously computed gradients. In this paper, we compare several traditional perturbation-generation methods for image classification with ours. Experimental results show that our approach works well, outperforms recent techniques in inducing misclassification, and fools deep network models with excellent efficiency.
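The abstract names two ingredients: accumulating previously computed gradients so the update direction can escape poor local extrema, and adaptively adjusting the perturbation strength at each step. The sketch below is a minimal PyTorch illustration of those general ideas only, not the authors' algorithm; the function name, the cosine-similarity step heuristic, and all parameter values are assumptions made for illustration.

```python
# Illustrative sketch of a momentum-based iterative attack with an adaptive
# step size. NOT the paper's exact method; it only demonstrates the two ideas
# mentioned in the abstract (gradient history + adaptive perturbation strength).
import torch
import torch.nn.functional as F

def adaptive_momentum_attack(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Return an adversarial version of batch x (labels y) under an L_inf budget eps.

    eps, steps, and mu (momentum decay) are assumed hyperparameters.
    """
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)      # accumulated (history) gradient
    alpha = eps / steps          # base per-step strength, adapted below

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # Combine the current gradient with the gradient history so the
        # update direction is smoothed across iterations (momentum).
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)

        # Adaptive strength: take a larger step when the accumulated and the
        # current gradient disagree, a smaller one when they agree
        # (an illustrative heuristic, not taken from the paper).
        cos = F.cosine_similarity(g.flatten(1), grad.flatten(1), dim=1).mean()
        step = alpha * (2.0 - cos.clamp(0.0, 1.0))

        # Apply the signed update and project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + step * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

For example, `x_adv = adaptive_momentum_attack(model, images, labels)` would perturb a batch of images so that a classifier's predictions are pushed away from the true labels while each pixel stays within the eps budget.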

