DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks

by   Shiwen Ni, et al.

Adversarial training has been proven to be a powerful regularization method for improving the generalization of models. However, current adversarial training methods attack only the original input sample or the embedding vectors, so their attacks lack coverage and diversity. To further enhance the breadth and depth of attack, we propose a novel masked weight adversarial training method called DropAttack, which enhances the generalization of models by adding intentional worst-case adversarial perturbations to both the input and hidden layers in different dimensions and minimizing the adversarial risk generated by each layer. DropAttack is a general technique and can be adopted by a wide variety of neural networks with different architectures. To validate the effectiveness of the proposed method, we used five public datasets in the fields of natural language processing (NLP) and computer vision (CV) for experimental evaluation. We compare the proposed method with other adversarial training methods and regularization methods, and our method achieves state-of-the-art results on all datasets. In addition, DropAttack can achieve the same performance using only half of the training data required by standard training methods. Theoretical analysis reveals that DropAttack performs gradient regularization on a random subset of the input and weight parameters of the model. Further visualization experiments show that DropAttack pushes the minimum risk of the model to a lower and flatter loss landscape. Our source code is publicly available at https://github.com/nishiwen1214/DropAttack.
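The abstract describes the core loop: compute gradients, randomly mask which input and weight entries are attacked, add worst-case (gradient-direction) perturbations to the masked entries, and minimize the resulting adversarial risk alongside the clean risk. The following is a minimal sketch of such a training step on a toy logistic-regression model; the model, hyperparameters, and helper names are illustrative assumptions, not the paper's implementation (which targets deep networks; see the linked repository for the authors' code).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(w, x, y):
    """Binary cross-entropy loss plus gradients w.r.t. weights and inputs."""
    p = sigmoid(x @ w)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    err = (p - y) / len(y)        # dL/dz for each sample
    grad_w = x.T @ err            # gradient w.r.t. the weight vector
    grad_x = np.outer(err, w)     # gradient w.r.t. each input row
    return loss, grad_w, grad_x

def dropattack_step(w, x, y, eps=0.1, p_attack=0.5, lr=0.5):
    # 1. Clean forward/backward pass.
    clean_loss, grad_w, grad_x = loss_and_grads(w, x, y)
    # 2. Randomly mask which weights/inputs are attacked (the "Drop" part).
    mask_w = rng.random(w.shape) < p_attack
    mask_x = rng.random(x.shape) < p_attack
    # 3. Worst-case perturbations along the gradient sign, on masked entries only.
    r_w = eps * np.sign(grad_w) * mask_w
    r_x = eps * np.sign(grad_x) * mask_x
    # 4. Adversarial risk with perturbed weights and inputs.
    adv_loss, adv_grad_w, _ = loss_and_grads(w + r_w, x + r_x, y)
    # 5. Minimize clean risk plus adversarial risk.
    w_new = w - lr * (grad_w + adv_grad_w)
    return w_new, clean_loss, adv_loss

# Toy data: 2-D points labeled by the sign of their first coordinate.
x = rng.normal(size=(64, 2))
y = (x[:, 0] > 0).astype(float)
w = np.zeros(2)
for _ in range(200):
    w, clean_loss, adv_loss = dropattack_step(w, x, y)
print(f"final clean loss: {clean_loss:.3f}")
```

The random mask is what distinguishes this from plain FGSM-style adversarial training: on each step only a random subset of parameters and input dimensions is perturbed, which is how the method covers both input and weight space without attacking everything at once.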



