On Pruning Adversarially Robust Neural Networks

02/24/2020
by Vikash Sehwag, et al.

In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively studied robust training and network pruning independently, each addressing one of these challenges, we show that naively integrating existing pruning techniques with multiple types of robust training, including verifiably robust training, leads to poor robust accuracy even though it preserves high regular accuracy. We further demonstrate that making pruning techniques aware of the robust learning objective leads to a large improvement in performance. We realize this insight by formulating the pruning objective as an empirical risk minimization problem, which we then solve with SGD. We demonstrate the success of the proposed pruning technique on the CIFAR-10, SVHN, and ImageNet datasets with four different robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. Specifically, at a 99% connection pruning ratio, we achieve gains of up to 3.2, 10.0, and 17.8 percentage points in robust accuracy under state-of-the-art adversarial attacks for the ImageNet, CIFAR-10, and SVHN datasets, respectively. Our code and compressed networks are publicly available at https://github.com/inspire-group/compactness-robustness.
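
The abstract describes making pruning aware of the robust training objective by casting "which connections to keep" as an empirical risk minimization problem solved with SGD. Below is a minimal, illustrative sketch (not the authors' released code, which lives at the GitHub link above) of one way to realize that idea in PyTorch: per-connection importance scores are optimized with SGD against a PGD adversarial loss while the pretrained weights stay frozen, and only the highest-scoring 1% of connections survive. The PrunedLinear layer, the straight-through mask, and all hyperparameters here are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrunedLinear(nn.Linear):
    """Linear layer whose connections are masked by learned importance scores."""

    def __init__(self, in_features, out_features, prune_ratio=0.99):
        super().__init__(in_features, out_features)
        self.prune_ratio = prune_ratio
        # The scores are the only parameters updated during the pruning phase.
        self.scores = nn.Parameter(0.01 * torch.randn_like(self.weight))

    def _mask(self):
        # Keep the (1 - prune_ratio) fraction of connections with the largest |score|.
        k = max(1, int((1 - self.prune_ratio) * self.scores.numel()))
        threshold = self.scores.abs().flatten().topk(k).values.min()
        hard = (self.scores.abs() >= threshold).float()
        # Straight-through estimator: binary mask in the forward pass,
        # gradients flow back to the scores.
        return hard.detach() + self.scores.abs() - self.scores.abs().detach()

    def forward(self, x):
        return F.linear(x, self.weight * self._mask(), self.bias)


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD, used to turn the pruning objective into a robust loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()


# Pruning phase: freeze the pretrained weights and run SGD over the scores so
# that the surviving 1% of connections minimize the adversarial (robust) loss.
model = nn.Sequential(nn.Flatten(), PrunedLinear(784, 256), nn.ReLU(), PrunedLinear(256, 10))
for name, param in model.named_parameters():
    param.requires_grad_(name.endswith("scores"))
optimizer = torch.optim.SGD(
    [p for n, p in model.named_parameters() if n.endswith("scores")], lr=0.1, momentum=0.9
)

x = torch.rand(32, 1, 28, 28)          # stand-in batch; replace with real data
y = torch.randint(0, 10, (32,))
for _ in range(10):
    x_adv = pgd_attack(model, x, y)    # adversarial examples for the robust loss
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A natural way to use such a pruning step is between robust pre-training and a final robust fine-tuning pass: once the scores converge, the binary mask is fixed and the surviving weights are fine-tuned with the same robust training objective.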

