Colored Noise Injection for Training Adversarially Robust Neural Networks

03/04/2020
by Evgenii Zheltonozhskii, et al.

Even though deep learning has shown unmatched performance on various tasks, neural networks have been shown to be vulnerable to small adversarial perturbations of the input, which lead to significant performance degradation. In this work, we extend the idea of adding independent Gaussian noise to weights and activations during adversarial training (PNI) to the injection of colored noise for defense against common white-box and black-box attacks. We show that our approach outperforms PNI and various previous approaches in terms of adversarial accuracy on the CIFAR-10 dataset. In addition, we provide an extensive ablation study of the proposed method, justifying the chosen configurations.
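To make the idea concrete, below is a minimal PyTorch sketch of correlated ("colored") noise injection into the weights of a linear layer. It is not the paper's implementation: the class name, the learnable scale `alpha`, and the low-rank mixing matrix used to induce correlations are illustrative assumptions; the authors' actual parameterization of the noise covariance may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ColoredNoiseLinear(nn.Module):
    """Linear layer whose weights are perturbed by correlated Gaussian noise
    during training. White noise eps ~ N(0, I) is mixed through a learnable
    matrix L, so the injected noise alpha * (L @ eps) has covariance
    alpha^2 * L L^T instead of being independent per weight (as in PNI)."""

    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Learnable noise magnitude (analogous to the PNI scale parameter).
        self.alpha = nn.Parameter(torch.tensor(0.1))
        # Low-rank mixing matrix: correlates the noise added to different weights.
        self.mix = nn.Parameter(torch.randn(out_features * in_features, rank) * 0.01)

    def forward(self, x):
        w = self.linear.weight
        if self.training:
            eps = torch.randn(self.mix.shape[1], device=w.device)
            noise = (self.mix @ eps).view_as(w)  # one colored-noise sample
            w = w + self.alpha * noise
        return F.linear(x, w, self.linear.bias)
```

In this sketch the layer behaves deterministically at evaluation time and samples fresh colored noise at every training forward pass; during adversarial training the same stochastic forward pass would also be used when generating the attack, mirroring how PNI is typically applied.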
