Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification

10/30/2020
by   Yongwei Wang, et al.

Deep neural networks are vulnerable to adversarial attacks. White-box attacks can fool neural networks with small adversarial perturbations, especially for large images. However, keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box attacks: such adversarial examples are often easy to spot because of their poor visual quality, which undermines the practical threat of adversarial attacks. In this study, to perceptually improve the image quality of black-box adversarial examples, we propose structure-aware adversarial attacks that generate adversarial images guided by psychological perceptual models. Specifically, we allow larger perturbations in perceptually insignificant regions while assigning smaller or no perturbations to visually sensitive regions. Beyond these spatially constrained adversarial perturbations, we also propose a novel structure-aware frequency attack in the discrete cosine transform (DCT) domain. Because the proposed attacks are independent of gradient estimation, they can be directly combined with existing gradient-based attacks. Experimental results show that, at a comparable attack success rate (ASR), the proposed methods produce adversarial examples with considerably improved visual quality for free. At comparable perceptual quality, the proposed approaches achieve higher attack success rates; in particular, for the frequency structure-aware attacks, the average ASR improves by more than 10%.
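As a rough illustration of the spatial structure-aware idea, a gradient-sign perturbation can be scaled by a per-pixel sensitivity mask so that textured regions absorb larger changes while smooth, visually sensitive regions stay nearly untouched. The block-variance mask below is an illustrative assumption, not the paper's exact psychological perceptual model, and `structure_aware_perturb` is a hypothetical helper name:

```python
import numpy as np

def perceptual_mask(image, window=8, eps=1e-8):
    """Crude per-pixel sensitivity estimate via block variance.

    High-variance (textured) blocks can hide larger perturbations, so
    they get mask values near 1; smooth blocks, where changes are most
    visible, get values near 0. Illustrative stand-in for a proper
    perceptual model (e.g., a just-noticeable-difference map).
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=float)
    for i in range(0, h, window):
        for j in range(0, w, window):
            block = image[i:i + window, j:j + window]
            mask[i:i + window, j:j + window] = block.var()
    return mask / (mask.max() + eps)

def structure_aware_perturb(image, grad_sign, budget=8.0):
    """Apply a masked sign-gradient step and clip to valid pixel range."""
    mask = perceptual_mask(image)
    adv = image + budget * mask * grad_sign
    return np.clip(adv, 0.0, 255.0)
```

The frequency variant would apply the same weighting idea to DCT coefficients instead of pixels, concentrating perturbation energy in perceptually less significant frequency bands.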
