Invariance-based Adversarial Attack on Neural Machine Translation Systems

08/03/2019
by Akshay Chaturvedi, et al.

Recently, NLP models have been shown to be susceptible to adversarial attacks. In this paper, we explore adversarial attacks on neural machine translation (NMT) systems. Given a sentence in the source language, the goal of the proposed attack is to change multiple words while ensuring that the predicted translation remains unchanged. To choose the replacement words from the source vocabulary, we propose a soft-attention based technique. The experiments are conducted on two language pairs, English-German (en-de) and English-French (en-fr), and two state-of-the-art NMT systems: a BLSTM-based encoder-decoder with attention and the Transformer. The proposed soft-attention based technique outperforms existing methods such as HotFlip by a significant margin across all the conducted experiments. The results demonstrate that state-of-the-art NMT systems are unable to capture the semantics of the source language.
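To make the attack setting concrete, below is a minimal sketch of an invariance-based substitution loop. This is not the authors' exact algorithm: the greedy search, the use of the lowest soft-attention score to pick which source position to perturb, and the `translate` and `attention_scores` callables are all illustrative assumptions standing in for a real NMT model's decode and attention outputs.

```python
# Hedged sketch of an invariance-based attack (illustrative, not the
# paper's exact method): greedily replace source words while the model's
# predicted translation stays unchanged.
from typing import Callable, List


def invariance_attack(
    source: List[str],
    vocab: List[str],
    translate: Callable[[List[str]], str],          # hypothetical: model decode
    attention_scores: Callable[[List[str]], List[float]],  # hypothetical: per-source-token soft attention
    max_changes: int = 5,
) -> List[str]:
    original = translate(source)
    adversarial = list(source)
    changed = set()
    for _ in range(max_changes):
        # Assumption: attack the source position the decoder attends to
        # least, on the intuition that it influences the output least.
        scores = attention_scores(adversarial)
        candidates = sorted(range(len(adversarial)), key=lambda i: scores[i])
        replaced = False
        for pos in candidates:
            if pos in changed:
                continue
            for word in vocab:
                if word == adversarial[pos]:
                    continue
                trial = adversarial[:pos] + [word] + adversarial[pos + 1:]
                # Keep the substitution only if the translation is invariant.
                if translate(trial) == original:
                    adversarial = trial
                    changed.add(pos)
                    replaced = True
                    break
            if replaced:
                break
        if not replaced:  # no position admits an invariant substitution
            break
    return adversarial
```

In practice, `translate` would be a beam-search decode of the target NMT system and the candidate vocabulary would be filtered rather than scanned exhaustively; the brute-force inner loop here is kept only for clarity of the invariance criterion.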
