Adversarial Attacks for Multi-view Deep Models

06/19/2020
by Xuli Sun, et al.

Recent work has highlighted the vulnerability of many deep machine learning models to adversarial examples, drawing increasing attention to adversarial attacks, which can be used to evaluate the security and robustness of models before they are deployed. However, to the best of our knowledge, there is no specific research on adversarial attacks against multi-view deep models. This paper proposes two multi-view attack strategies: the two-stage attack (TSA) and the end-to-end attack (ETEA). Under the mild assumption that the single-view model on which the target multi-view model is based is known, we first propose the TSA strategy. The main idea of TSA is to attack the multi-view model with adversarial examples generated by attacking the associated single-view model, by which state-of-the-art single-view attack methods are directly extended to the multi-view scenario. We then propose the ETEA strategy for the case where the multi-view model itself is publicly available. ETEA performs direct attacks on the target multi-view model, for which we develop three effective multi-view attack methods. Finally, building on the fact that adversarial examples generalize well across different models, this paper takes the adversarial attack on the multi-view convolutional neural network as an example to validate the effectiveness of the proposed multi-view attacks. Extensive experimental results demonstrate that our multi-view attack strategies are capable of attacking multi-view deep models, and we additionally find that multi-view models are more robust than single-view models.
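The abstract describes TSA only at a high level. As a rough illustration of the two-stage idea, the sketch below perturbs each view by attacking a known single-view model and then feeds the perturbed views to the target multi-view model. It is a minimal sketch under stated assumptions, not the paper's implementation: FGSM is used as a representative single-view attack, and the names single_view_model, multi_view_model, and the stacked-view input format are hypothetical.

# Illustrative sketch of a two-stage attack (TSA) on a multi-view model.
# Assumptions (not from the paper): FGSM serves as the single-view attack,
# and the multi-view model accepts views stacked along a view dimension.
import torch
import torch.nn.functional as F

def fgsm_single_view(single_view_model, view, label, eps=0.03):
    """Stage 1: craft an adversarial version of one view by attacking
    the known single-view model with a one-step FGSM perturbation."""
    view = view.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(single_view_model(view), label)
    loss.backward()
    adv_view = view + eps * view.grad.sign()   # gradient-sign step
    return adv_view.clamp(0.0, 1.0).detach()   # keep a valid image range

def two_stage_attack(single_view_model, multi_view_model, views, label, eps=0.03):
    """Stage 2: query the target multi-view model with the perturbed views.
    `views` is a list of per-view batches of shape (B, C, H, W)."""
    adv_views = [fgsm_single_view(single_view_model, v, label, eps) for v in views]
    pred = multi_view_model(torch.stack(adv_views, dim=1)).argmax(dim=-1)
    return adv_views, pred   # the attack succeeds where pred != label

Any stronger single-view attack could be substituted for FGSM in stage 1; the point of TSA is that the multi-view model is never attacked directly, only queried with adversarial views transferred from the single-view model.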

