Semantic Adversarial Examples

03/16/2018
by Hossein Hosseini, et al.

Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has mostly been limited to finding small perturbations that maximize the model's prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. Several defense methods exploit this property to counter adversarial examples, applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely "Semantic Adversarial Examples": images that are arbitrarily perturbed to fool the model, but in such a way that the modified image still semantically represents the same object as the original. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape-bias property of the human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation, and Value) color space and then randomly shifting the Hue and Saturation components while keeping the Value component the same. Our experimental results on the CIFAR10 dataset show that the accuracy of a VGG16 network on adversarial color-shifted images is 5.7%.
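The color-shift transformation described above can be sketched with the standard library's `colorsys` module. This is a minimal illustration, not the paper's exact algorithm: the sampling ranges for the hue and saturation shifts are assumptions (here, a uniform hue shift that wraps around, since hue is cyclic, and a uniform saturation shift clipped to [0, 1]), and pixels are represented as `(r, g, b)` tuples with components in [0, 1].

```python
import colorsys
import random

def hsv_color_shift(pixels, hue_shift=None, sat_shift=None, rng=random):
    """Apply one random Hue/Saturation shift to every pixel of an image,
    leaving the Value component unchanged.

    `pixels` is a list of (r, g, b) tuples with components in [0, 1].
    A single shift is drawn per image, so the perturbation is a global
    color change rather than per-pixel noise. Sampling ranges are
    illustrative assumptions, not the paper's exact scheme.
    """
    if hue_shift is None:
        hue_shift = rng.random()            # hue is cyclic: wraps mod 1
    if sat_shift is None:
        sat_shift = rng.uniform(-1.0, 1.0)  # clipped so S stays in [0, 1]
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h = (h + hue_shift) % 1.0
        s = min(1.0, max(0.0, s + sat_shift))
        out.append(colorsys.hsv_to_rgb(h, s, v))  # Value left untouched
    return out
```

Because V = max(r, g, b) in this color space, keeping Value fixed means the brightest channel magnitude of each pixel is preserved, which is why the perturbed image retains the original's luminance structure even under large hue rotations.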


