Why Neural Machine Translation Prefers Empty Outputs

12/24/2020
by Xing Shi, et al.

We investigate why neural machine translation (NMT) systems assign high probability to empty translations. We find two explanations. First, label smoothing reduces the model's confidence in correct-length translations, making it easier for the empty translation to eventually outscore them. Second, NMT systems use the same high-frequency EoS word to end all target sentences, regardless of length. This creates an implicit smoothing that increases the probability of zero-length translations. Using different EoS types in target sentences of different lengths exposes and eliminates this implicit smoothing.
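A minimal sketch of the proposed fix may help make it concrete: instead of ending every target sentence with a single shared EoS token, each sentence is terminated with an EoS type chosen by its length, so no one ending token dominates the training data. The bucket boundaries and token names below are illustrative assumptions, not the paper's exact configuration.

```python
def eos_for_length(num_tokens: int) -> str:
    """Return a length-specific EoS token (hypothetical length buckets)."""
    if num_tokens <= 10:
        return "</s_short>"
    elif num_tokens <= 30:
        return "</s_medium>"
    return "</s_long>"


def append_length_eos(target_sentence: str) -> str:
    """Terminate the target with a length-bucketed EoS instead of a shared </s>."""
    tokens = target_sentence.split()
    return " ".join(tokens + [eos_for_length(len(tokens))])


if __name__ == "__main__":
    print(append_length_eos("the cat sat on the mat"))
    # -> "the cat sat on the mat </s_short>"
```

Because each EoS type now appears only in sentences of its own length bucket, it is far less frequent than a shared EoS, which removes the implicit smoothing toward ending (and thus emptying) the output early.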
