A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection
We propose a stealthy clean-label video backdoor attack against Deep Learning (DL)-based models aimed at detecting a particular class of spoofing attacks, namely video rebroadcast attacks. The injected backdoor does not affect spoofing detection under normal conditions, but induces a misclassification in the presence of a specific triggering signal. The proposed backdoor relies on a temporal trigger that alters the average chrominance of the video sequence. The trigger is designed by taking the peculiarities of the Human Visual System (HVS) into account, reducing its visibility and hence increasing the stealthiness of the backdoor. To force the network to attend to the trigger in the challenging clean-label scenario, we choose the samples to be poisoned according to a so-called Outlier Poisoning Strategy (OPS), whereby the triggering signal is inserted into the training samples that the network finds most difficult to classify. The effectiveness and generality of the proposed backdoor attack are validated experimentally on several datasets and anti-spoof rebroadcast detection architectures.
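To make the trigger concrete, the following is a minimal sketch of a temporal chrominance perturbation: a low-amplitude sinusoid added frame by frame to the U and V channels of a YUV video, leaving luminance untouched, since the HVS is less sensitive to chrominance variations. The sinusoidal form and the `amplitude`, `freq_hz`, and `fps` parameters are illustrative assumptions, not the paper's exact trigger design.

```python
import numpy as np

def add_chrominance_trigger(video_yuv, amplitude=2.0, freq_hz=1.0, fps=30.0):
    """Superimpose a low-amplitude temporal sinusoid on the chrominance
    channels of a video (array of shape T x H x W x 3 in YUV, uint8).

    NOTE: the sinusoidal trigger and its parameters are assumptions made
    for illustration; only the idea of a temporal chrominance perturbation
    comes from the abstract.
    """
    t = np.arange(video_yuv.shape[0], dtype=np.float32) / fps
    offset = amplitude * np.sin(2.0 * np.pi * freq_hz * t)  # one offset per frame
    poisoned = video_yuv.astype(np.float32)
    # Shift the mean chrominance of each frame; luminance (Y) is untouched.
    poisoned[..., 1] += offset[:, None, None]
    poisoned[..., 2] += offset[:, None, None]
    return np.clip(poisoned, 0, 255).astype(np.uint8)
```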
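Likewise, a minimal sketch of the OPS sample selection: rank candidate training samples by how hard a pre-trained model finds them to classify, then poison the hardest ones (keeping their labels, as required by the clean-label setting). Using per-sample cross-entropy loss as the difficulty score, and a loader yielding `(index, input, label)` batches, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def select_outlier_samples(model, loader, poison_budget, device="cpu"):
    """Return the indices of the `poison_budget` samples the model finds
    hardest to classify, to be used as poisoning targets under OPS.

    NOTE: scoring difficulty via per-sample cross-entropy loss is an
    assumption; the abstract only states that the trigger is inserted
    into the samples the network finds most difficult to classify.
    """
    model.eval()
    scores = {}
    with torch.no_grad():
        for idx, x, y in loader:
            logits = model(x.to(device))
            loss = F.cross_entropy(logits, y.to(device), reduction="none")
            for i, l in zip(idx.tolist(), loss.tolist()):
                scores[i] = l
    ranked = sorted(scores, key=scores.get, reverse=True)  # hardest first
    return ranked[:poison_budget]
```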