Towards Robust Deep Neural Networks with BANG

12/01/2016
by   Andras Rozsa, et al.
Machine learning models, including state-of-the-art deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors. This lack of robustness raises fundamental questions about their generalization properties and poses a serious concern for practical deployments. Because such perturbations can remain imperceptible, the perturbed inputs, commonly called adversarial examples, demonstrate an inherent inconsistency between vulnerable machine learning models and human perception, and some prior work casts the problem as a security issue as well. Despite the significance of these instabilities and the ensuing research, their cause is not well understood, and no effective method has been developed to address the problem highlighted by adversarial examples. In this paper, we present a novel theory to explain why this unpleasant phenomenon exists in deep neural networks. Based on that theory, we introduce a simple, efficient and effective training approach, Batch Adjusted Network Gradients (BANG), which significantly improves the robustness of machine learning models. While the BANG technique does not rely on any form of data augmentation or the application of adversarial images for training, the resultant classifiers are more resistant to adversarial perturbations while maintaining or even enhancing overall classification performance.
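The abstract does not spell out BANG's update rule, but its name suggests rebalancing gradient contributions within a training batch. As a purely illustrative sketch (not the authors' exact scheme; the function name, the `alpha` parameter, and the norm-based rescaling are all assumptions), one way to adjust per-example gradients so that already well-classified examples, whose gradients are tiny, still shape the weight update is:

```python
import numpy as np

def batch_adjusted_grads(per_example_grads, alpha=0.5):
    """Illustrative batch-level gradient rebalancing (NOT the paper's
    exact BANG rule). Each example's gradient is scaled toward the
    largest per-example gradient norm in the batch, so examples with
    small gradients are not drowned out by hard, misclassified ones.

    per_example_grads: (batch_size, n_params) array of gradients.
    alpha in [0, 1]: 0 = no adjustment, 1 = full norm equalization.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    max_norm = norms.max()
    # Guard against division by zero for exactly-zero gradients.
    safe_norms = np.where(norms > 0, norms, 1.0)
    # Interpolate between the raw scale (1.0) and full equalization.
    scale = (1.0 - alpha) + alpha * (max_norm / safe_norms)
    # Zero gradients stay zero regardless of the scaling factor.
    return per_example_grads * np.where(norms > 0, scale, 0.0)
```

With `alpha=1.0`, every nonzero per-example gradient is rescaled to the batch's maximum norm; with `alpha=0.0`, the gradients pass through unchanged, recovering standard minibatch SGD behavior.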


