Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples

08/05/2020
by Xiaojun Jia, et al.

Recent research has demonstrated that adding imperceptible perturbations to original images can fool deep learning models. However, current adversarial perturbations usually take the form of meaningless noise and thus have no practical use. Image watermarking, by contrast, is a technique widely used for copyright protection. A watermark can be regarded as a kind of meaningful noise: adding one to an original image neither changes people's understanding of the image content nor arouses suspicion. It is therefore natural to generate adversarial examples using watermarks. In this paper, we propose a novel watermark perturbation for adversarial examples (Adv-watermark), which combines image watermarking techniques and adversarial example algorithms; adding a meaningful watermark to a clean image suffices to attack DNN models. Specifically, we propose a novel optimization algorithm, called Basin Hopping Evolution (BHE), to generate adversarial watermarks in the black-box attack setting. Thanks to BHE, Adv-watermark requires only a few queries to the threat models to complete an attack. A series of experiments conducted on the ImageNet and CASIA-WebFace datasets shows that the proposed method efficiently generates adversarial examples and outperforms state-of-the-art attack methods. Moreover, Adv-watermark is more robust against image transformation defense methods.
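The search described in the abstract operates purely in the black-box setting: the attacker optimizes only the watermark's placement and transparency, scoring each candidate by querying the threat model. The Python sketch below alpha-blends a watermark into an image and runs a simplified basin-hopping-plus-evolution loop over the parameters (x, y, alpha). It is a minimal sketch of the idea, not the authors' implementation; `query_fn` is a hypothetical stand-in for the black-box model's confidence in the true class, and the mutation and crossover steps are illustrative simplifications of BHE.

```python
import numpy as np

def apply_watermark(image, watermark, x, y, alpha):
    """Alpha-blend `watermark` into `image` at integer position (x, y).

    Both arrays are floats in [0, 1] with matching channel counts."""
    out = image.copy()
    h, w = watermark.shape[:2]
    out[y:y + h, x:x + w] = ((1 - alpha) * out[y:y + h, x:x + w]
                             + alpha * watermark)
    return np.clip(out, 0.0, 1.0)

def bhe_attack(image, watermark, query_fn, pop_size=10, iters=50,
               step=0.15, seed=0):
    """Simplified basin-hopping + evolution search over (x, y, alpha).

    `query_fn(img)` is a hypothetical black-box oracle returning the
    model's confidence in the true class; the attacker minimizes it."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    h, w = watermark.shape[:2]
    # Each candidate encodes (x_frac, y_frac, alpha) in [0, 1].
    pop = rng.uniform(size=(pop_size, 3))

    def fitness(p):
        x = int(p[0] * (W - w))
        y = int(p[1] * (H - h))
        alpha = 0.3 + 0.7 * p[2]  # keep the watermark clearly visible
        return query_fn(apply_watermark(image, watermark, x, y, alpha))

    scores = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        # Basin-hopping step: random jump from each candidate.
        trial = np.clip(pop + rng.normal(scale=step, size=pop.shape), 0, 1)
        trial_scores = np.array([fitness(p) for p in trial])
        # Greedy acceptance: keep the better of parent and trial.
        better = trial_scores < scores
        pop[better], scores[better] = trial[better], trial_scores[better]
        # Evolution step: replace the worst candidate with a crossover
        # of the two best ones.
        order = np.argsort(scores)
        child = np.where(rng.random(3) < 0.5, pop[order[0]], pop[order[1]])
        pop[order[-1]], scores[order[-1]] = child, fitness(child)
    best = pop[np.argmin(scores)]
    return best, scores.min()
```

Each iteration costs pop_size + 1 queries, so the total query budget stays small, which is the property the abstract attributes to BHE. The returned `best` vector encodes the watermark placement in normalized coordinates and is decoded with the same mapping used inside `fitness`.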


research · 11/30/2018
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples
Deep neural networks (DNNs) have been demonstrated to be vulnerable to a...

research · 12/01/2018
FineFool: Fine Object Contour Attack via Attention
Machine learning models have been shown vulnerable to adversarial attack...

research · 08/25/2022
Semantic Preserving Adversarial Attack Generation with Autoencoder and Genetic Algorithm
Widely used deep learning models are found to have poor robustness. Litt...

research · 08/14/2020
Efficiently Constructing Adversarial Examples by Feature Watermarking
With the increasing attentions of deep learning models, attacks are also...

research · 02/28/2020
Applying Tensor Decomposition to image for Robustness against Adversarial Attack
Nowadays the deep learning technology is growing faster and shows dramat...

research · 01/01/2020
Erase and Restore: Simple, Accurate and Resilient Detection of L_2 Adversarial Examples
By adding carefully crafted perturbations to input images, adversarial e...

research · 08/25/2021
Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE
Traditional adversarial examples are typically generated by adding pertu...
