Improving Adversarial Transferability with Gradient Refining

05/11/2021
by Guoqiu Wang, et al.

Deep neural networks are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to original images. Most existing adversarial attack methods achieve nearly 100% attack success rates under the white-box setting, but only relatively low success rates under the black-box setting. To improve the transferability of adversarial examples in the black-box setting, several methods have been proposed, e.g., input diversity, translation-invariant attack, and momentum-based attack. In this paper, we propose a method named Gradient Refining, which further improves adversarial transferability by correcting the useless gradients introduced by input diversity through multiple transformations. Our method is generally applicable to many gradient-based attack methods combined with input diversity. Extensive experiments are conducted on the ImageNet dataset, and our method achieves an average transfer success rate of 82.07% under the black-box setting, outperforming other state-of-the-art methods by a large margin of 6.0%. We also applied our method in the CVPR 2021 Unrestricted Adversarial Attacks on ImageNet competition organized by Alibaba and won second place in attack success rates among 1558 teams.
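
The core idea can be sketched as follows: at each attack iteration, the input gradient is computed on several independently transformed copies of the current adversarial example and averaged, so that gradient components specific to any single random transformation tend to cancel while the transferable signal is reinforced. The PyTorch sketch below illustrates this averaging step on top of a momentum-based, diverse-input attack; the resize-and-pad transform, the number of transformed copies, and all hyperparameters are illustrative assumptions, not the authors' exact algorithm.

    import torch
    import torch.nn.functional as F

    def random_resize_pad(x, low=299, high=330):
        # Diverse-input transform (assumed, in the spirit of DI-FGSM):
        # randomly resize, then randomly zero-pad back to a fixed size.
        size = int(torch.randint(low, high, (1,)).item())
        resized = F.interpolate(x, size=(size, size), mode="nearest")
        pad = high - size
        left = int(torch.randint(0, pad + 1, (1,)).item())
        top = int(torch.randint(0, pad + 1, (1,)).item())
        return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)

    def refined_gradient(model, x_adv, y, n_samples=5):
        # Average the input gradient over several independently transformed
        # copies of x_adv; transformation-specific ("useless") gradient
        # components largely cancel out in the average.
        grad = torch.zeros_like(x_adv)
        for _ in range(n_samples):
            x = x_adv.detach().requires_grad_(True)
            loss = F.cross_entropy(model(random_resize_pad(x)), y)
            grad += torch.autograd.grad(loss, x)[0]
        return grad / n_samples

    def attack(model, x, y, eps=16 / 255, steps=10, mu=1.0, n_samples=5):
        # MI-FGSM-style iteration driven by the refined (averaged) gradient.
        alpha = eps / steps
        x_adv, momentum = x.clone(), torch.zeros_like(x)
        for _ in range(steps):
            g = refined_gradient(model, x_adv, y, n_samples)
            momentum = mu * momentum + g / g.abs().mean(dim=(1, 2, 3), keepdim=True)
            x_adv = x_adv + alpha * momentum.sign()
            # Project back into the eps-ball and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
        return x_adv.detach()

Because the transform is applied inside the differentiable graph, autograd maps each copy's gradient back to the original input resolution automatically; the same averaging wrapper can sit on top of other gradient-based attacks that use input diversity.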
