Improving Neural Machine Translation with Parent-Scaled Self-Attention

09/06/2019
by Emanuele Bugliarello, et al.

Most neural machine translation (NMT) models operate on source and target sentences, treating them as sequences of words and neglecting their syntactic structure. Recent studies have shown that embedding the syntactic information of a source sentence in recurrent neural networks can improve translation accuracy, especially for low-resource language pairs. However, state-of-the-art NMT models are based on self-attention networks (e.g., the Transformer), for which it is not yet clear how best to embed syntactic information. In this work, we explore different approaches to making such models syntactically aware. Moreover, we propose a novel method to incorporate syntactic information into the self-attention mechanism of the Transformer encoder by introducing attention heads that can attend to the dependency parent of each token. The proposed model is simple yet effective, requiring no additional parameters and improving the translation quality of the Transformer, especially for long sentences and low-resource scenarios. We show the efficacy of the proposed approach on the NC11 English-German, WMT16 and WMT17 English-German, WMT18 English-Turkish, and WAT English-Japanese translation tasks.
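To make the idea of a parent-attending head concrete, below is a minimal sketch of a parent-scaled self-attention head. It assumes the dependency parse enters as a Gaussian prior over key positions, centred on each query token's dependency parent, which rescales the softmax attention weights; the function name, the variance hyperparameter, and the renormalization step are illustrative choices for this sketch, not the authors' exact implementation.

```python
# Hypothetical sketch of a parent-scaled self-attention head.
# Assumption: parent information is injected as a (non-trainable) Gaussian
# prior over key positions, centred on each token's dependency parent.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def parent_scaled_attention(Q, K, V, parent_pos, var=1.0):
    """Q, K, V: (seq_len, d_k) arrays for a single head.
    parent_pos: (seq_len,) index of each token's dependency parent."""
    seq_len, d_k = Q.shape
    # Standard scaled dot-product attention scores.
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq_len, seq_len)
    # Gaussian prior over key positions, centred on each token's parent.
    positions = np.arange(seq_len)
    dist = positions[None, :] - parent_pos[:, None]    # (seq_len, seq_len)
    prior = np.exp(-dist ** 2 / (2.0 * var))           # unnormalised N(parent, var)
    # Rescale the attention weights by the prior; no trainable parameters added.
    weights = softmax(scores, axis=-1) * prior
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                                  # (seq_len, d_k)

# Toy usage: 5 tokens, head dimension 8, parent indices from a dependency parse.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((5, 8)) for _ in range(3))
parents = np.array([1, 1, 1, 4, 1])
print(parent_scaled_attention(Q, K, V, parents).shape)  # (5, 8)
```

In this reading, tokens close to a query token's parent receive higher attention mass, while the head otherwise behaves like a standard scaled dot-product attention head.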

Related research

04/08/2020  Explicit Reordering for Neural Machine Translation
In Transformer-based neural machine translation (NMT), the positional en...

11/23/2021  Boosting Neural Machine Translation with Dependency-Scaled Self-Attention Network
The neural machine translation model assumes that syntax knowledge can b...

05/16/2019  Joint Source-Target Self Attention with Locality Constraints
The dominant neural machine translation models are based on the encoder-...

10/13/2021  Semantics-aware Attention Improves Neural Machine Translation
The integration of syntactic structures into Transformer machine transla...

09/05/2019  Source Dependency-Aware Transformer with Supervised Self-Attention
Recently, Transformer has achieved the state-of-the-art performance on m...

11/01/2018  Hybrid Self-Attention Network for Machine Translation
The encoder-decoder is the typical framework for Neural Machine Translat...

04/05/2019  Modeling Recurrence for Transformer
Recently, the Transformer model that is based solely on attention mechan...
