Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack

by Adnan Siraj Rakin et al.

Recent developments in the field of Deep Learning have exposed the underlying vulnerability of Deep Neural Networks (DNNs) to adversarial examples. In image classification, an adversarial example is a carefully modified image that is visually indistinguishable from the original image but can cause a DNN model to misclassify it. Training a network with Gaussian noise is an effective technique for model regularization, and thus improves robustness against input variation. Inspired by this classical method, we explore utilizing the regularization characteristic of noise injection to improve DNN robustness against adversarial attack. In this work, we propose Parametric Noise Injection (PNI), which involves trainable Gaussian noise injection at each layer, on either activations or weights, through solving a min-max optimization problem embedded with adversarial training. These noise parameters are trained explicitly to achieve improved robustness. To the best of our knowledge, this is the first work to use trainable noise injection to improve network robustness against adversarial attacks, rather than manually configuring the injected noise level through cross-validation. Extensive results show that the proposed PNI technique effectively improves robustness against a variety of powerful white-box and black-box attacks, including PGD, C&W, FGSM, transferable attacks, and the ZOO attack. Last but not least, PNI improves both clean- and perturbed-data accuracy in comparison to state-of-the-art defense methods, outperforming the current unbroken PGD defense by 1.1% using the ResNet-20 architecture.
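The abstract describes injecting Gaussian noise into each layer, with the noise magnitude controlled by a coefficient that is learned jointly with the weights rather than hand-tuned. A minimal PyTorch sketch of that idea applied to a convolutional layer's weights is shown below; the module name `PNIConv2d`, the initial value of `alpha`, and the scaling of the noise by the weight standard deviation are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNIConv2d(nn.Module):
    """Hypothetical sketch of Parametric Noise Injection on weights:
    w_noisy = w + alpha * eta, with eta ~ N(0, std(w)^2) and alpha trainable.
    """
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        # One trainable noise coefficient per layer; 0.25 is an assumed init.
        self.alpha = nn.Parameter(torch.tensor(0.25))

    def forward(self, x):
        w = self.conv.weight
        if self.training:
            # Scale the noise by the (detached) weight std so alpha is
            # dimensionless; alpha receives gradients through this term.
            eta = torch.randn_like(w) * w.detach().std()
            w = w + self.alpha * eta
        return F.conv2d(x, w, self.conv.bias, padding=self.conv.padding)
```

In the full method, such layers would be trained under the min-max (adversarial training) objective, so `alpha` is pushed toward whatever noise level best trades off clean accuracy against robustness, instead of being fixed by cross-validation.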


Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness

Deep Neural Network (DNN) trained by the gradient descent method is know...

Colored Noise Injection for Training Adversarially Robust Neural Networks

Even though deep learning has shown unmatched performance on various ta...

LSTM-based Load Forecasting Robustness Against Noise Injection Attack in Microgrid

In this paper, we investigate the robustness of an LSTM neural network a...

Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise

Neural Ordinary Differential Equation (Neural ODE) has been proposed as ...

Adversarial Sampling for Fairness Testing in Deep Neural Network

In this research, we focus on the usage of adversarial sampling to test ...

Adversarial Robustness Against Image Color Transformation within Parametric Filter Space

We propose Adversarial Color Enhancement (ACE), a novel approach to gene...
