Self-Ensemble Adversarial Training for Improved Robustness

by Hongjun Wang, et al.

Thanks to the numerous breakthroughs that machine intelligence has brought to real-world applications, deep neural networks (DNNs) are widely employed in critical systems. However, the predictions of DNNs can be manipulated with imperceptible adversarial perturbations, which impedes their further deployment and may have profound security and privacy implications. By incorporating adversarial samples into the training data pool, adversarial training is the strongest principled defense against a variety of adversarial attacks among all defense methods. Recent works mainly focus on developing new loss functions or regularizers, attempting to find a single optimal point in weight space. But none of them taps the potential of the classifiers produced along the way during standard adversarial training, i.e., the states on the search trajectory. In this work, we focus on the weight states of models throughout the training process and devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method that yields a robust classifier by averaging the weights of historical models. This considerably improves the robustness of the target model against several well-known adversarial attacks, even when training is supervised only by the naive cross-entropy loss. We also discuss the relationship between ensembling the predictions of different adversarially trained models and the prediction of a single weight-ensembled model, and provide theoretical and empirical evidence that the proposed self-ensemble method yields a smoother loss landscape and better robustness than both individual models and an ensemble of predictions from different classifiers. We further analyze a subtle but fatal issue in the usual setup of the self-ensemble model, which causes the weight-ensembled method to deteriorate in the late phases of training.
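The core mechanism the abstract describes, averaging the weights of historical models along the training trajectory, can be sketched as a running (exponential moving) average of the parameters. The decay value and the plain-dict representation of a "state dict" below are illustrative assumptions for this sketch, not the paper's exact averaging scheme.

```python
# Minimal sketch of self-ensembling by weight averaging: after each training
# step, blend the current model weights into a running average, and use the
# averaged weights at evaluation time. Parameters are represented here as a
# plain dict of floats purely for illustration.

def update_self_ensemble(avg_state, current_state, decay=0.999):
    """Blend the current training weights into the running average.

    avg_state:     dict mapping parameter name -> averaged value
    current_state: dict mapping parameter name -> current training value
    decay:         how much of the old average to keep (an assumed default)
    """
    for name, w in current_state.items():
        if name not in avg_state:
            avg_state[name] = w  # first snapshot initializes the average
        else:
            avg_state[name] = decay * avg_state[name] + (1.0 - decay) * w
    return avg_state

# Toy usage: two scalar "weights" tracked over three training steps.
avg = {}
for step in range(3):
    current = {"w1": float(step), "w2": 1.0}
    avg = update_self_ensemble(avg, current, decay=0.5)
```

In a real adversarial-training loop, `current_state` would come from the classifier being trained on adversarial examples, and the averaged weights would define the final robust model that is evaluated against attacks.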




