Accurate, reliable and fast robustness evaluation

07/01/2019
by Wieland Brendel, et al.

Over the past five years, the susceptibility of neural networks to minimal adversarial perturbations has moved from a peculiar phenomenon to a core issue in deep learning. Despite much attention, however, progress towards more robust models is significantly impaired by the difficulty of evaluating the robustness of neural network models: today's methods are either fast but brittle (gradient-based attacks) or fairly reliable but slow (score- and decision-based attacks). Here we develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient masking than other gradient-based attacks, (b) perform better and are more query-efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria, and (d) require virtually no hyperparameter tuning. These findings are carefully validated across a diverse set of six models and hold for L2 and L∞ in both targeted and untargeted scenarios. Implementations will be made available in all major toolboxes (Foolbox, CleverHans and ART). Furthermore, we will soon add additional content and experiments, including L0 and L1 versions of our attack as well as additional comparisons to other L2 and L∞ attacks. We hope that this class of attacks will make robustness evaluations easier and more reliable, thus contributing to more signal in the search for more robust machine learning models.
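
For context, the "fast but brittle" gradient-based baseline mentioned above can be summarized in a few lines. Below is a minimal sketch of a standard projected gradient descent (PGD) attack under an L∞ constraint; it is not the attack proposed in this work, and the names (model, images, labels, epsilon, step_size, steps) are illustrative assumptions. It shows the kind of purely gradient-driven procedure that gradient masking can defeat, which is why more reliable attacks are needed for robustness evaluation.

```python
# Minimal sketch of a standard untargeted L-infinity PGD attack (NOT the attack
# proposed in the paper); all argument names and default values are illustrative.
import torch
import torch.nn.functional as F

def pgd_linf(model, images, labels, epsilon=8 / 255, step_size=2 / 255, steps=10):
    """Maximize the cross-entropy loss within an epsilon-ball around the inputs."""
    x_adv = images.clone().detach()
    # random start inside the epsilon-ball, clipped to the valid pixel range
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # ascend along the sign of the gradient, then project back onto the ball
            x_adv = x_adv + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, images - epsilon), images + epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

A robustness evaluation would then typically report accuracy on pgd_linf(model, images, labels) across a range of epsilon budgets; the attacks introduced here target the same setting while aiming to remain reliable under gradient masking and to need virtually no hyperparameter tuning.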

