Improve Adversarial Robustness via Weight Penalization on Classification Layer

10/08/2020
by   Cong Xu, et al.

It is well known that deep neural networks are vulnerable to adversarial attacks. Recent studies show that a well-designed classification part can lead to better robustness, but there is still much room for improvement along this line. In this paper, we first prove that, from a geometric point of view, the robustness of a neural network is equivalent to an angular-margin condition on the classifier weights. We then explain why ReLU-type activation functions are not a good choice under this framework. These findings reveal the limitations of existing approaches and lead us to develop a novel lightweight weight-penalization defense, which is simple and scales well. Empirical results on multiple benchmark datasets demonstrate that our method effectively improves the robustness of the network with little additional computation, while maintaining high classification accuracy on clean data.
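The abstract does not spell out the exact penalty, but the idea of enforcing an angular margin between the weight vectors of the classification layer can be sketched as follows. This is an illustrative assumption, not the paper's formulation: the function `angular_margin_penalty`, the `margin` parameter, and the hinge form are all hypothetical.

```python
import numpy as np

def angular_margin_penalty(W, margin=0.5):
    # Hypothetical sketch (not the paper's exact method): penalize pairs of
    # classifier weight vectors whose mutual angle is too small, i.e. whose
    # cosine similarity exceeds 1 - margin.
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-norm class vectors
    cos = Wn @ Wn.T                                    # pairwise cosine similarities
    np.fill_diagonal(cos, 0.0)                         # ignore self-similarity
    # hinge loss: cost accrues only when two class directions are too close
    return np.maximum(cos - (1.0 - margin), 0.0).sum()
```

In training, such a term would be added to the classification loss so that the final-layer weight directions are pushed apart; orthogonal class vectors (e.g. `W = np.eye(3)`) incur zero penalty, while nearly parallel ones are penalized.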

Related research:

- 04/07/2019 — JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks
  It has been demonstrated that very simple attacks can fool highly-sophis...
- 05/02/2019 — Weight Map Layer for Noise and Adversarial Attack Robustness
  Convolutional neural networks (CNNs) are known for their good performanc...
- 03/15/2023 — Improving Adversarial Robustness with Hypersphere Embedding and Angular-based Regularizations
  Adversarial training (AT) methods have been found to be effective agains...
- 05/19/2020 — Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks
  Convolutional neural network (CNN) has surpassed traditional methods for...
- 09/21/2021 — Modelling Adversarial Noise for Adversarial Defense
  Deep neural networks have been demonstrated to be vulnerable to adversar...
- 05/31/2021 — Adaptive Feature Alignment for Adversarial Training
  Recent studies reveal that Convolutional Neural Networks (CNNs) are typi...
- 06/19/2020 — A general framework for defining and optimizing robustness
  Robustness of neural networks has recently attracted a great amount of i...
