AdvGAN++ : Harnessing latent layers for adversary generation

08/02/2019
by   Puneet Mangla, et al.

Adversarial examples are fabricated inputs, nearly indistinguishable from the original images, that mislead neural networks and drastically lower their performance. The recently proposed AdvGAN, a GAN-based approach, takes the input image as a prior for generating adversaries against a target model. In this work, we show that latent features can serve as better priors than input images for adversary generation by proposing AdvGAN++, a version of AdvGAN that achieves higher attack success rates than AdvGAN while generating perceptually realistic images on the MNIST and CIFAR-10 datasets.
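The core idea can be sketched in PyTorch: instead of feeding the raw image to the generator (as AdvGAN does), extract latent features from an intermediate layer of the victim model and use those as the generator's input. The architecture, layer split, and perturbation bound below are illustrative assumptions for MNIST-sized inputs, not the paper's exact design; the GAN training losses are omitted.

```python
import torch
import torch.nn as nn

# Hypothetical victim classifier; the feature/classifier split and all
# layer sizes are illustrative assumptions, not the paper's architecture.
class Victim(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # latent-feature extractor f(x)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),
        )
        self.classifier = nn.Linear(16 * 7 * 7, 10)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z.flatten(1))

# AdvGAN++-style generator: maps latent features (not the raw image)
# back to an image-sized, bounded perturbation.
class LatentGenerator(nn.Module):
    def __init__(self, eps=0.3):
        super().__init__()
        self.eps = eps                            # assumed L-inf budget
        self.net = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),  # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),   # 14 -> 28
            nn.Tanh(),                            # output in [-1, 1]
        )

    def forward(self, x, latent):
        delta = self.eps * self.net(latent)       # bounded perturbation
        return torch.clamp(x + delta, 0.0, 1.0)   # keep valid pixel range

victim = Victim()
gen = LatentGenerator()
x = torch.rand(4, 1, 28, 28)                      # stand-in for MNIST images
with torch.no_grad():
    z = victim.features(x)                        # latent prior for the generator
    x_adv = gen(x, z)                             # candidate adversarial images
```

During training, the generator would be optimized jointly against a discriminator (for realism) and the victim's loss (to induce misclassification), as in the usual AdvGAN objective.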

