Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness

by Yi Cai, et al.

Adversarial attacks pose serious security risks to modern deep learning systems. Adversarial training can significantly enhance the robustness of neural network models by suppressing non-robust features, but the resulting models often suffer a significant accuracy loss on clean data. Ensemble training methods have emerged as a promising defense: by diversifying the vulnerabilities among sub-models, they resist adversarial attacks while maintaining accuracy comparable to standard training. However, existing ensemble methods scale poorly, since training complexity grows rapidly as more sub-models are added. Moreover, deploying an ensemble of multiple sub-models in real-world applications is difficult under tight hardware-resource budgets and latency requirements. In this work, we propose Ensemble-in-One (EIO), a simple but efficient way to train an ensemble within one random gated network (RGN). EIO augments the original model by replacing its parameterized layers with multi-path random gated blocks (RGBs), forming an RGN. Diversifying the vulnerabilities of the numerous paths within the RGN yields better robustness. The approach scales well because the number of paths in an EIO network grows exponentially with network depth. Our experiments demonstrate that EIO consistently outperforms previous ensemble training methods with even lower computational overhead.
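To illustrate the core construction, the following is a minimal NumPy sketch, not the paper's implementation: each random gated block holds k parallel candidate layers (plain linear maps stand in for the real parameterized layers), a gate picks one path at random per forward pass, and stacking d such blocks yields k**d distinct sub-model paths. All class and method names here are hypothetical.

```python
import numpy as np

class RandomGatedBlock:
    """Holds k parallel candidate layers; a random gate selects one path per forward."""
    def __init__(self, k, dim, rng):
        # k candidate linear maps playing the role of the replaced parameterized layer
        self.paths = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(k)]
        self.rng = rng

    def forward(self, x):
        gate = self.rng.integers(len(self.paths))  # uniform random gate
        return x @ self.paths[gate]

class RandomGatedNetwork:
    """Stacking depth blocks with k paths each gives k**depth distinct sub-models."""
    def __init__(self, depth, k, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.blocks = [RandomGatedBlock(k, dim, rng) for _ in range(depth)]

    def forward(self, x):
        # Each call samples one random path through the network (one sub-model)
        for block in self.blocks:
            x = block.forward(x)
        return x

    def num_paths(self):
        # Total number of distinct end-to-end paths in the RGN
        return int(np.prod([len(b.paths) for b in self.blocks]))

net = RandomGatedNetwork(depth=4, k=3, dim=8)
print(net.num_paths())  # 3**4 = 81 sub-models from only 4 * 3 blocks of weights
out = net.forward(np.ones(8))
```

This makes the scalability claim concrete: parameter count grows linearly with depth (one set of k candidate layers per block), while the number of sub-model paths available for vulnerability diversification grows exponentially.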


