Generative Poisoning Attack Method Against Neural Networks

03/03/2017
by Chaofei Yang, et al.

Poisoning attacks are recognized as a severe security threat to machine learning algorithms. In many applications, deep neural network (DNN) models, for example, are re-trained on data collected from public sources, and this input data can be poisoned. Although poisoning attacks against support vector machines (SVMs) have been studied extensively, little is known about how such attacks can be mounted against neural networks (NNs), especially DNNs. In this work, we first examine the feasibility of applying the traditional gradient-based method (referred to as the direct gradient method) to generate poisoned data against NNs by leveraging the gradient of the target model w.r.t. the normal data. We then propose a generative method to accelerate the generation of poisoned data: an auto-encoder (generator) produces the poisoned data and is updated by a reward function of the loss, while the target NN model (discriminator) receives the poisoned data and calculates the loss w.r.t. the normal data. Our experimental results show that the generative method can speed up poisoned-data generation by up to 239.38x compared with the direct gradient method, at the cost of slightly lower model accuracy degradation. We also design a countermeasure that detects such poisoning attacks by monitoring the loss of the target model.
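The generative loop described above resembles a GAN-style interplay between a poisoning auto-encoder and the victim model. The sketch below illustrates one possible round of that interaction in PyTorch; the architecture, optimizers, hyperparameters, and the simplified reward (here the generator directly maximises the target model's loss on the poisoned batch, whereas the paper defines the reward as a function of the loss w.r.t. the normal data) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoisonGenerator(nn.Module):
    """Auto-encoder that maps a normal sample to a poisoned sample (assumed shape)."""
    def __init__(self, dim=784, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def poisoning_round(generator, target_model, x_normal, y_normal,
                    gen_opt, target_opt):
    # Generator proposes poisoned samples from the normal batch.
    x_poison = generator(x_normal)

    # Generator update: increase the loss the target model suffers
    # (gradient ascent, implemented as minimising the negative).
    # NOTE: simplified reward; the paper's reward is defined over the
    # target model's loss w.r.t. the normal data.
    gen_opt.zero_grad()
    gen_reward = F.cross_entropy(target_model(x_poison), y_normal)
    (-gen_reward).backward()
    gen_opt.step()

    # Victim re-training step: the target model ("discriminator") ingests
    # the poisoned data as if it were ordinary public training data.
    target_opt.zero_grad()
    victim_loss = F.cross_entropy(target_model(x_poison.detach()), y_normal)
    victim_loss.backward()
    target_opt.step()

    # Loss on the clean batch, tracked to measure accuracy degradation.
    with torch.no_grad():
        clean_loss = F.cross_entropy(target_model(x_normal), y_normal)
    return clean_loss.item()
```

In a full attack this round would be repeated over the victim's re-training stream; the clean-batch loss returned here is also the kind of signal the proposed countermeasure would monitor for anomalous growth.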
