Model-Based Robust Deep Learning

05/20/2020
by Alexander Robey, et al.

While deep learning has resulted in major breakthroughs in many application domains, the frameworks commonly used in deep learning remain fragile to artificially crafted and imperceptible changes in the data. In response to this fragility, adversarial training has emerged as a principled approach for enhancing the robustness of deep learning with respect to norm-bounded perturbations. However, there are other sources of fragility for deep learning that are arguably more common and less thoroughly studied. Indeed, natural variation such as changes in lighting or weather conditions can significantly degrade the accuracy of trained neural networks, making such variation a significant challenge for deep learning. In this paper, we propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning. Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data. Critical to our paradigm is first obtaining a model of natural variation that can be used to vary data over a range of natural conditions. Such models may be either known a priori or learned from data. In the latter case, we show that deep generative models can be used to learn models of natural variation that are consistent with realistic conditions. We then exploit such models in three novel model-based robust training algorithms to enhance the robustness of deep learning with respect to the given model. Our extensive experiments show that, across a variety of naturally occurring conditions and across various datasets, deep neural networks trained with our model-based algorithms significantly outperform both standard deep learning algorithms and norm-bounded robust deep learning algorithms.
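
To make the idea concrete, below is a minimal sketch of one model-based robust training step, assuming a PyTorch setting. It is not the paper's exact algorithms: the model of natural variation G(x, delta), the dimension of the nuisance parameter delta, and the random-search approximation of the inner maximization are all illustrative assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def model_based_robust_step(classifier, G, x, y, optimizer,
                            n_samples=10, delta_dim=8):
    """One illustrative training step: fit the classifier on the worst
    naturally-varied version of each example under a (hypothetical)
    model of natural variation G(x, delta)."""
    classifier.train()

    # Inner maximization, approximated by random search: sample several
    # nuisance parameters delta and keep, per example, the variation on
    # which the current classifier incurs the highest loss.
    with torch.no_grad():
        worst_loss, worst_x = None, None
        for _ in range(n_samples):
            delta = torch.randn(x.size(0), delta_dim, device=x.device)
            x_var = G(x, delta)  # e.g. the same scene under new lighting/weather
            loss = F.cross_entropy(classifier(x_var), y, reduction='none')
            if worst_loss is None:
                worst_loss, worst_x = loss, x_var
            else:
                mask = loss > worst_loss
                worst_loss = torch.where(mask, loss, worst_loss)
                worst_x[mask] = x_var[mask]

    # Outer minimization: a standard gradient step on the worst-case variations.
    optimizer.zero_grad()
    batch_loss = F.cross_entropy(classifier(worst_x), y)
    batch_loss.backward()
    optimizer.step()
    return batch_loss.item()
```

In the learned-model case described in the abstract, G would be a deep generative model conditioned on a nuisance code delta and trained so that its outputs reflect realistic natural variation; the random-search inner loop above could equally be replaced by gradient-based search over delta.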

