On Generation of Adversarial Examples using Convex Programming

03/09/2018
by   Emilio Rafael Balda, et al.

It has been observed that deep learning architectures tend to make erroneous decisions with high confidence on specially crafted adversarial instances. In this work, we show that a perturbation analysis of these architectures yields a method for generating adversarial instances via convex programming which, for classification tasks, recovers variants of existing non-adaptive adversarial methods. The core idea is that neural networks are well approximated locally by linear functions, so the proposed method can be used to design adversarial noise under various desirable constraints and for different types of networks. Experiments show the competitive performance of the obtained algorithms, in terms of fooling ratio, when benchmarked against well-known adversarial methods.
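The idea of attacking a linear approximation can be sketched as follows. This is not the paper's exact algorithm, only an illustrative special case: for a linearized classifier, maximizing the first-order increase of the loss under an l-infinity budget has the closed-form solution eps * sign(gradient), i.e. the FGSM-style perturbation that the abstract says this framework recovers as a variant. The toy weights and dimensions below are arbitrary assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear "network": logits = W @ x (weights are illustrative only).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
y = int(np.argmax(W @ x))  # label predicted on the clean input

# Gradient of the cross-entropy loss w.r.t. the input x.
p = softmax(W @ x)
one_hot = np.eye(3)[y]
grad_x = W.T @ (p - one_hot)

# Under the linear approximation, the perturbation maximizing the loss
# subject to ||delta||_inf <= eps is eps * sign(grad_x).
eps = 0.5
delta = eps * np.sign(grad_x)
x_adv = x + delta

# The perturbation respects the norm budget by construction.
assert np.max(np.abs(delta)) <= eps + 1e-12
```

Other norm constraints on the perturbation lead to different closed-form or convex-programming solutions, which is where the flexibility claimed in the abstract comes from.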
