Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm

10/15/2021
by Tengfei Zhao, et al.

Research on adversarial attacks in the text domain has attracted much interest in recent years, and many methods with high attack success rates have been proposed. However, these attack methods are inefficient: they require a large number of queries to the victim model when crafting text adversarial examples. In this paper, a novel attack method is proposed whose attack success rate surpasses the benchmark attack methods and, more importantly, whose attack efficiency is much higher. The novel method is empirically evaluated by attacking WordCNN, LSTM, BiLSTM, and BERT on four benchmark datasets. For instance, when attacking BERT and BiLSTM on IMDB it achieves a 100% attack success rate, higher than the state-of-the-art method, while issuing only 1/4 and 1/6.5 as many queries to the victim models, respectively. Further experiments also show that the adversarial examples generated by the novel method transfer well.
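The abstract frames the attack as a query-efficient beam search over word substitutions but does not spell out the procedure. The following is a minimal sketch of a generic beam-search word-substitution attack under that reading; the victim_score function, synonym table, beam width, and success threshold are illustrative assumptions, not components of the paper's actual algorithm.

```python
# Minimal sketch of a beam-search word-substitution attack (illustrative only;
# the victim model, synonym source, and thresholds are stand-ins, not the paper's).

from typing import Callable, Dict, List, Tuple


def beam_search_attack(
    tokens: List[str],
    synonyms: Dict[str, List[str]],
    target_score: Callable[[List[str]], float],
    beam_width: int = 3,
    max_steps: int = 10,
    success_threshold: float = 0.5,
) -> Tuple[List[str], float]:
    """Substitute words step by step, keeping the `beam_width` best candidates,
    until the victim model's score for the original label drops below
    `success_threshold` (i.e. the prediction would flip)."""
    beam = [(tokens, target_score(tokens))]  # (candidate sentence, score to minimize)
    for _ in range(max_steps):
        candidates = []
        for cand, _ in beam:
            for i, word in enumerate(cand):
                for sub in synonyms.get(word, []):
                    new = cand[:i] + [sub] + cand[i + 1:]
                    candidates.append((new, target_score(new)))  # one query per candidate
        if not candidates:
            break
        candidates.sort(key=lambda x: x[1])  # lower score on the true label is better
        beam = candidates[:beam_width]       # keep only the top-k candidates
        if beam[0][1] < success_threshold:   # attack succeeded
            return beam[0]
    return beam[0]


# Toy usage: a fake "victim model" that scores how positive a review looks.
if __name__ == "__main__":
    POSITIVE_WORDS = {"great": 0.4, "wonderful": 0.4, "fine": 0.1, "decent": 0.1}

    def victim_score(tokens: List[str]) -> float:
        # Pretend probability of the "positive" class.
        return min(1.0, 0.3 + sum(POSITIVE_WORDS.get(t, 0.0) for t in tokens))

    synonyms = {"great": ["fine", "decent"], "wonderful": ["okay", "passable"]}
    sentence = "the movie was great and the acting was wonderful".split()
    adv, score = beam_search_attack(sentence, synonyms, victim_score, beam_width=2)
    print(" ".join(adv), score)
```

In this toy setup each call to target_score stands in for one query to the victim model, so the number of such calls corresponds to the query budget whose reduction the abstract reports.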


Related research

03/09/2023 · BeamAttack: Generating High-quality Textual Adversarial Examples through Beam Search and Mixed Semantic Spaces
Natural language processing models based on neural networks are vulnerab...

12/01/2020 · Improving the Transferability of Adversarial Examples with the Adam Optimizer
Convolutional neural networks have outperformed humans in image recognit...

02/22/2020 · Temporal Sparse Adversarial Attack on Gait Recognition
Gait recognition has a broad application in social security due to its a...

06/07/2023 · PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts
A key component of modern conversational systems is the Dialogue State T...

03/01/2023 · Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process
Recent studies on adversarial examples expose vulnerabilities of natural...

06/20/2022 · Diversified Adversarial Attacks based on Conjugate Gradient Method
Deep learning models are vulnerable to adversarial examples, and adversa...

12/26/2018 · Practical Adversarial Attack Against Object Detector
In this paper, we proposed the first practical adversarial attacks again...
