Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems

07/09/2021
by Shangyu Xie, et al.

Widely deployed deep neural network (DNN) models have been shown to be vulnerable to adversarial perturbations in many applications (e.g., image, audio, and text classification). To date, only a few adversarial perturbations have been proposed to mislead the DNN models in video recognition systems, and they do so by simply injecting 2D perturbations into individual video frames. However, such attacks may overly perturb the videos because they do not learn the spatio-temporal features (across temporal frames) that are commonly extracted by DNN models for video recognition. To the best of our knowledge, we propose the first black-box attack framework that generates universal 3-dimensional (U3D) perturbations to subvert a variety of video recognition systems. U3D has many advantages: (1) as a transfer-based attack, U3D can universally attack multiple DNN models for video recognition without access to the target DNN model; (2) the high transferability of U3D makes this universal black-box attack easy to launch, and it can be further strengthened by integrating queries over the target model when necessary; (3) U3D ensures human imperceptibility; (4) U3D can bypass existing state-of-the-art defense schemes; (5) U3D can be efficiently generated with a few pre-learned parameters and then immediately injected to attack real-time DNN-based video recognition systems. We have conducted extensive experiments to evaluate U3D on multiple DNN models and three large-scale video datasets. The experimental results demonstrate its superiority and practicality.

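The abstract describes U3D at a high level: a spatio-temporal (3D) perturbation is produced from a handful of pre-learned parameters and injected into every incoming clip, so the attack can run in real time without access to the target model. The sketch below illustrates that workflow only; the procedural noise function, its parameter names, and the L_inf budget are assumptions made for illustration, not the authors' actual U3D generator.

# Minimal sketch (assumptions throughout, not the authors' exact U3D method):
# build a low-parameter 3D (time x height x width) perturbation and inject it
# into a video clip under an L_inf budget. The sinusoidal noise below is a
# stand-in for whatever parameterized 3D noise the attack actually learns.
import numpy as np

def u3d_like_perturbation(frames, height, width, freqs=(0.5, 0.05, 0.05),
                          phase=0.0, epsilon=8.0):
    """Procedural 3D noise controlled by a handful of parameters (hypothetical)."""
    t = np.arange(frames)[:, None, None]
    y = np.arange(height)[None, :, None]
    x = np.arange(width)[None, None, :]
    ft, fy, fx = freqs
    noise = np.sin(2 * np.pi * (ft * t + fy * y + fx * x) + phase)
    # Scale to the pixel-space L_inf budget so the perturbation stays small.
    return np.clip(epsilon * noise, -epsilon, epsilon)

def inject(video, perturbation):
    """Tile the perturbation along the temporal axis and add it to the video."""
    T = video.shape[0]
    reps = int(np.ceil(T / perturbation.shape[0]))
    tiled = np.tile(perturbation, (reps, 1, 1))[:T]
    # Broadcast over the channel axis; keep pixels in the valid range.
    return np.clip(video + tiled[..., None], 0, 255).astype(video.dtype)

# Example: perturb a random 32-frame RGB clip.
clip = np.random.randint(0, 256, size=(32, 112, 112, 3), dtype=np.uint8)
delta = u3d_like_perturbation(frames=16, height=112, width=112)
adv_clip = inject(clip, delta)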