Adversarial Example Generation

02/01/2019
by Yatie Xiao, et al.

Deep neural networks have achieved remarkable success in computer vision and audio tasks. However, in classification domains, deep neural models are easily fooled by adversarial examples. Many attack methods generate adversarial examples with large image distortion and low similarity between the original and the corresponding adversarial examples. To address these issues, we propose an adversarial attack method that adapts the gradient direction when generating perturbations, producing perturbations that can escape local minima. In this paper, we compare several traditional perturbation-generation methods for image classification against ours. Experimental results show that our approach outperforms recent techniques in the rate of induced misclassification and fools deep network models with excellent efficiency.
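The abstract does not specify the exact update rule, so the sketch below is only an illustrative assumption: a momentum-style iterative gradient-sign attack in which an accumulated gradient direction adapts across steps, which is the common way such "escape local minima" behavior is realized. The function name, hyperparameters, and PyTorch framing are all assumptions, not the authors' implementation.

```python
# Illustrative sketch only (assumed momentum-style iterative attack, not the
# paper's exact method). Accumulating the gradient direction across steps
# stabilizes the perturbation direction and helps escape poor local minima.
import torch
import torch.nn.functional as F


def adaptive_direction_attack(model, x, y, eps=8 / 255, steps=10, decay=1.0):
    """Generate adversarial examples for an image batch x with labels y."""
    alpha = eps / steps                  # per-step perturbation budget
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)              # accumulated (adaptive) gradient direction

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Normalize the current gradient and fold it into the running direction.
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = decay * g + grad

        # Step along the sign of the accumulated direction, then project back
        # into the eps-ball around the original image and the valid pixel range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)

    return x_adv.detach()
```

Under these assumptions, `decay` controls how strongly past gradients influence the current direction; setting it to 0 reduces the sketch to a plain iterative gradient-sign attack.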
