Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy

07/14/2023
by Zihao Zhu, et al.

Data-poisoning-based backdoor attacks aim to insert backdoors into models by manipulating training datasets without controlling the training process of the target model. Existing attack methods mainly focus on designing triggers or fusion strategies between triggers and benign samples. However, they often select the samples to be poisoned at random, disregarding the varying importance of each poisoning sample for backdoor injection. A recent selection strategy filters a fixed-size poisoning sample pool by recording forgetting events, but it fails to consider the remaining samples outside the pool from a global perspective. Moreover, computing forgetting events requires significant additional computing resources. Therefore, how to efficiently and effectively select poisoning samples from the entire dataset is an urgent problem in backdoor attacks. To address it, we first introduce a poisoning mask into the regular backdoor training loss. We suppose that a backdoored model trained on hard poisoning samples has a stronger backdoor effect on easy ones, and that such hard samples can be identified by hindering the normal training process (i.e., maximizing the loss with respect to the mask). To further integrate this with the normal training process, we then propose a learnable poisoning sample selection strategy that learns the mask together with the model parameters through a min-max optimization. Specifically, the outer loop aims to achieve the backdoor attack goal by minimizing the loss based on the selected samples, while the inner loop selects hard poisoning samples that impede this goal by maximizing the loss. After several rounds of adversarial training, we finally select effective poisoning samples with high contribution. Extensive experiments on benchmark datasets demonstrate the effectiveness and efficiency of our approach in boosting backdoor attack performance.
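To make the min-max selection concrete, below is a minimal PyTorch sketch of how a learnable poisoning mask could be trained alongside the model. The function name, the softmax relaxation of the binary mask, the optimizers, and the alternating update schedule are all illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of the min-max poisoning-sample selection idea.
# PoisonedDataset layout, hyperparameters, and the soft mask relaxation
# are assumptions for illustration only.
import torch
import torch.nn.functional as F

def train_with_learnable_selection(model, clean_loader, poison_pool,
                                   target_label, num_selected, epochs,
                                   inner_steps=1, device="cpu"):
    """Alternately update the model weights (outer minimization) and a
    per-sample poisoning mask over the candidate pool (inner maximization)."""
    # One logit per candidate poisoning sample: a soft relaxation of the
    # binary mask m in {0,1}^N with k selected entries (assumed).
    mask_logits = torch.zeros(len(poison_pool), requires_grad=True, device=device)
    opt_model = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    opt_mask = torch.optim.SGD([mask_logits], lr=0.1)

    # poison_pool is assumed to be a list of (triggered_input, _) pairs,
    # all relabeled to the attacker's target class.
    xs = torch.stack([x for x, _ in poison_pool]).to(device)
    ys = torch.full((len(poison_pool),), target_label, device=device)

    for _ in range(epochs):
        # Inner loop: select hard poisoning samples by *maximizing* the
        # mask-weighted poisoning loss with respect to the mask.
        for _ in range(inner_steps):
            weights = torch.softmax(mask_logits, dim=0)  # soft selection
            loss_poison = (weights * F.cross_entropy(model(xs), ys,
                                                     reduction="none")).sum()
            opt_mask.zero_grad()
            (-loss_poison).backward()  # gradient ascent on the mask
            opt_mask.step()

        # Outer loop: minimize benign loss plus the mask-weighted poisoning
        # loss with respect to the model parameters, as in regular backdoor
        # training. (Re-running the whole pool per batch is for clarity.)
        for xb, yb in clean_loader:
            xb, yb = xb.to(device), yb.to(device)
            weights = torch.softmax(mask_logits.detach(), dim=0)
            loss = F.cross_entropy(model(xb), yb) + \
                   (weights * F.cross_entropy(model(xs), ys,
                                              reduction="none")).sum()
            opt_model.zero_grad()
            loss.backward()
            opt_model.step()

    # After the adversarial rounds, keep the top-k candidates as the
    # final poisoning set.
    return torch.topk(mask_logits, num_selected).indices
```

The softmax here is a soft surrogate for the hard top-k constraint on the binary mask; the final hard selection is recovered by taking the k largest mask logits after training.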


Related research

- Sharpness-Aware Data Poisoning Attack (05/24/2023): Recent research has highlighted the vulnerability of Deep Neural Network...
- Active Learning for Event Extraction with Memory-based Loss Prediction Model (11/26/2021): Event extraction (EE) plays an important role in many industrial applica...
- SATBA: An Invisible Backdoor Attack Based On Spatial Attention (02/25/2023): As a new realm of AI security, backdoor attack has drawn growing attentio...
- A Proxy-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks (06/14/2023): Poisoning efficiency is a crucial factor in poisoning-based backdoor att...
- Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt (06/14/2022): Training on web-scale data can take months. But most computation and tim...
- Data-Efficient Backdoor Attacks (04/22/2022): Recent studies have proven that deep neural networks are vulnerable to b...
- One for More: Selecting Generalizable Samples for Generalizable ReID Model (12/10/2020): Current training objectives of existing person Re-IDentification (ReID)...
