Delving into the pixels of adversarial samples

06/21/2021
by Blerta Lindqvist, et al.

Despite extensive research into adversarial attacks, we know little about how such attacks affect image pixels; understanding these effects could lead us to better adversarial defenses. Motivated by instances we find where strong attacks do not transfer, we delve into adversarial examples at the pixel level to scrutinize how adversarial attacks alter image pixel values. We consider several ImageNet architectures (InceptionV3, VGG19, and ResNet50) and several strong attacks. We find that attacks can have different effects at the pixel level depending on the classifier architecture; in particular, input pre-processing plays a previously overlooked role in how attacks affect pixels. Based on the insights of this pixel-level examination, we find new ways to detect some of the strongest current attacks.
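
To make the pre-processing point concrete, here is a minimal sketch (not code from the paper) using the standard Keras reference implementations of the three named architectures. InceptionV3's `preprocess_input` rescales pixels to [-1, 1], while VGG19 and ResNet50 use Caffe-style per-channel mean subtraction with BGR ordering, so an identical raw-pixel perturbation lands at very different magnitudes in model input space. The dummy random image and the fixed ±8 (on the 0-255 scale) perturbation are illustrative assumptions, not the paper's attack setup.

```python
# Sketch: the same raw-pixel perturbation has different magnitudes in
# model input space depending on the architecture's pre-processing.
import numpy as np
from tensorflow.keras.applications.inception_v3 import preprocess_input as prep_inception
from tensorflow.keras.applications.vgg19 import preprocess_input as prep_vgg19
from tensorflow.keras.applications.resnet50 import preprocess_input as prep_resnet

rng = np.random.default_rng(0)
# Dummy uint8-range "image" and an L_inf perturbation of magnitude 8
# (0-255 scale) with a random sign pattern; purely illustrative.
clean = rng.integers(0, 256, size=(1, 224, 224, 3)).astype(np.float32)
adv = np.clip(clean + rng.choice([-8.0, 8.0], size=clean.shape), 0.0, 255.0)

for name, prep in [("InceptionV3", prep_inception),
                   ("VGG19", prep_vgg19),
                   ("ResNet50", prep_resnet)]:
    # preprocess_input may modify its argument in place, so pass copies.
    delta = prep(adv.copy()) - prep(clean.copy())
    print(f"{name:11s} max |perturbation| after pre-processing: {np.abs(delta).max():.4f}")
```

Running this prints roughly 0.063 for InceptionV3 (8 / 127.5 after rescaling to [-1, 1]) but 8.0 for VGG19 and ResNet50, whose constant mean subtraction cancels out of the difference. The same attack budget therefore looks very different to each classifier, in line with the abstract's observation about pre-processing.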

Related research

- 12/21/2020 · Blurring Fools the Network – Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring
  Existing pixel-level adversarial attacks on neural networks may be defic...

- 09/06/2018 · Are adversarial examples inevitable?
  A wide range of defenses have been proposed to harden neural networks ag...

- 12/05/2022 · Multiple Perturbation Attack: Attack Pixelwise Under Different ℓ_p-norms For Better Adversarial Performance
  Adversarial machine learning has been both a major concern and a hot top...

- 06/18/2021 · Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective
  The vulnerability of deep neural networks to adversarial examples, which...

- 12/31/2022 · A Comparative Study of Image Disguising Methods for Confidential Outsourced Learning
  Large training data and expensive model tweaking are standard features o...

- 02/15/2021 · CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification
  Adversarial attack is aimed at fooling the target classifier with imperc...

- 08/04/2023 · Multi-attacks: Many images + the same adversarial attack → many target labels
  We show that we can easily design a single adversarial perturbation P th...
