Type I Attack for Generative Models

03/04/2020
by Chengjin Sun, et al.

Generative models are popular tools with a wide range of applications. Nevertheless, they are as vulnerable to adversarial samples as classifiers are. Existing attack methods mainly focus on generating adversarial examples by adding imperceptible perturbations to the input, which leads to wrong outputs. However, we focus on another aspect of attack: cheating models with significant changes to the input. The former induces Type II errors, while the latter causes Type I errors. In this paper, we propose a Type I attack on generative models such as VAEs and GANs. One example for a VAE is that we can change an original image significantly into a meaningless one, yet the reconstructions of the two remain similar. To implement the Type I attack, we destroy the original image by increasing the distance between it and the adversarial example in the input space while keeping their outputs similar, exploiting the fact that very different inputs may map to similar features in deep neural networks. Experimental results show that our attack method effectively generates Type I adversarial examples for generative models on large-scale image datasets.
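To make the stated objective concrete, the sketch below shows one way such an attack could be set up against a VAE. It is an illustrative approximation of the idea, not the authors' exact algorithm: it assumes a pretrained PyTorch model `vae` whose `encode` method returns a differentiable latent code and whose `decode` maps that code back to an image, and the weight `lam`, step count, and learning rate are placeholder choices.

# A minimal sketch of the Type I attack objective against a VAE, in PyTorch.
# Illustrative only; `vae`, `encode`, `decode`, `lam`, `steps`, and `lr` are
# assumed/placeholder names and values.
import torch

def type1_attack_vae(vae, x, steps=500, lr=0.01, lam=10.0):
    """Drive x_adv far from x in the input space while keeping the VAE's
    reconstruction of x_adv close to its reconstruction of x."""
    vae.eval()
    with torch.no_grad():
        recon_x = vae.decode(vae.encode(x))          # reference reconstruction

    x_adv = x.clone().detach().requires_grad_(True)  # start from the original image
    opt = torch.optim.Adam([x_adv], lr=lr)

    for _ in range(steps):
        recon_adv = vae.decode(vae.encode(x_adv))
        input_dist = torch.norm(x_adv - x)             # maximize: large visible change
        output_dist = torch.norm(recon_adv - recon_x)  # minimize: keep outputs similar
        loss = -input_dist + lam * output_dist
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)                   # stay a valid image

    return x_adv.detach()

Maximizing the first term produces the large, visible change to the input, while the penalty on the reconstruction gap keeps the model's output close to what it produced for the clean image, which is exactly the Type I failure mode described above.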

Related research

02/22/2017 - Adversarial examples for generative models
We explore methods of producing adversarial examples on deep generative ...

03/18/2019 - Generating Adversarial Examples With Conditional Generative Adversarial Net
Recently, deep neural networks have significant progress and successful ...

09/01/2023 - Image Hijacks: Adversarial Images can Control Generative Models at Runtime
Are foundation models secure from malicious actors? In this work, we foc...

01/24/2022 - Hiding Behind Backdoors: Self-Obfuscation Against Generative Models
Attack vectors that compromise machine learning pipelines in the physica...

06/09/2020 - Low Distortion Block-Resampling with Spatially Stochastic Networks
We formalize and attack the problem of generating new images from old on...

10/14/2019 - Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models
Deep Neural Networks (DNNs) are vulnerable to deliberately crafted adver...

06/19/2021 - A Stealthy and Robust Fingerprinting Scheme for Generative Models
This paper presents a novel fingerprinting methodology for the Intellect...
