Bridging Source and Target Word Embeddings for Neural Machine Translation
Neural machine translation systems encode a source sequence into a vector from which a target sequence is generated via a decoder. Unlike in traditional statistical machine translation, source and target words are not directly mapped to each other by translation rules. Instead, they sit at the two ends of a long information channel in the encoder-decoder network, separated by source and target hidden states. This may lead to translations with implausible word alignments. In this paper, we bridge source and target word embeddings so as to shorten the distance between them. We propose three bridging strategies: 1) a source state bridging model that moves source word embeddings one step closer to their target counterparts, 2) a target state bridging model that exploits relevant source word embeddings for target state prediction, and 3) a direct link bridging model that directly connects source and target word embeddings so as to minimize the discrepancy between them. Experiments and analysis demonstrate that the proposed bridging models significantly improve the quality of both translations and word alignments.
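To make the third strategy concrete, below is a minimal sketch of a single decoder step in PyTorch that reuses the attention weights computed over encoder hidden states to also aggregate the raw source word embeddings, feeding that aggregate directly into target word prediction. This is an illustrative reconstruction of the direct link bridging idea, not the paper's exact formulation: the GRU cell, the additive attention scorer, and the concatenation-based output layer (`DirectBridgeDecoderStep`, `attn_score`, and all dimension choices) are assumptions for the sake of a runnable example.

```python
import torch
import torch.nn as nn

class DirectBridgeDecoderStep(nn.Module):
    """Illustrative decoder step: attention weights over encoder states are
    reused to pull in source *word embeddings*, shortening the path from
    source words to the target prediction (a direct-link-style bridge)."""

    def __init__(self, emb_dim, hid_dim, vocab_size):
        super().__init__()
        self.cell = nn.GRUCell(emb_dim + hid_dim, hid_dim)
        self.attn_score = nn.Linear(2 * hid_dim, 1)  # hypothetical additive scorer
        self.out = nn.Linear(2 * hid_dim + emb_dim, vocab_size)

    def forward(self, prev_emb, prev_state, enc_states, src_embs):
        # prev_emb:   (B, E)    embedding of the previous target word
        # prev_state: (B, H)    previous decoder hidden state
        # enc_states: (B, T, H) encoder hidden states
        # src_embs:   (B, T, E) source word embeddings
        B, T, H = enc_states.shape
        query = prev_state.unsqueeze(1).expand(-1, T, -1)             # (B, T, H)
        scores = self.attn_score(torch.cat([query, enc_states], -1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)                          # (B, T)
        ctx_h = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)   # (B, H)
        # Direct bridge: the same weights also aggregate source embeddings,
        # linking source words to the target word prediction directly.
        ctx_e = torch.bmm(alpha.unsqueeze(1), src_embs).squeeze(1)     # (B, E)
        state = self.cell(torch.cat([prev_emb, ctx_h], -1), prev_state)
        logits = self.out(torch.cat([state, ctx_h, ctx_e], -1))
        return logits, state, alpha

# Usage with toy dimensions: batch of 2, source length 5.
step = DirectBridgeDecoderStep(emb_dim=32, hid_dim=64, vocab_size=100)
logits, state, alpha = step(torch.randn(2, 32), torch.randn(2, 64),
                            torch.randn(2, 5, 64), torch.randn(2, 5, 32))
```

Because the attention weights `alpha` directly gate source embeddings into the output layer, the gradient of the target word loss reaches source embeddings without passing through the full encoder-decoder channel, which is one plausible reading of how such a bridge could sharpen word alignments.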