Assessing the Importance of Frequency versus Compositionality for Subword-based Tokenization in NMT

06/02/2023
by Benoist Wolleb et al.

Subword tokenization is the de facto standard for tokenization in neural language models and machine translation systems. Three advantages are frequently cited in favor of subwords: shorter encoding of frequent tokens, compositionality of subwords, and the ability to deal with unknown words. As their relative importance is not entirely clear yet, we propose a tokenization approach that enables us to separate frequency (the first advantage) from compositionality. The approach uses Huffman coding to tokenize words, by order of frequency, using a fixed number of symbols. Experiments with CS-DE, EN-FR and EN-DE NMT show that frequency alone accounts for 90%–95% of the scores reached by BPE, hence compositionality has less importance than previously thought.
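For illustration, here is a minimal Python sketch of the kind of frequency-only tokenizer the abstract describes: a k-ary Huffman code built over the word vocabulary, where a fixed inventory of symbols plays the role of the subword vocabulary and more frequent words receive shorter symbol sequences. The function name `huffman_codes`, the symbol inventory, and the toy frequencies are illustrative assumptions, not the authors' code; the paper's exact construction may differ.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(word_freqs, symbols):
    """k-ary Huffman coding over a word vocabulary (assumes len(symbols) >= 2):
    each word is mapped to a sequence of symbols, with more frequent words
    receiving shorter sequences. The symbol sequences are arbitrary, i.e.
    deliberately non-compositional."""
    k = len(symbols)
    ids = count()  # unique tie-breaker so the heap never compares dicts
    nodes = [(freq, next(ids), {word: []}) for word, freq in word_freqs.items()]
    # Pad with zero-frequency dummies so every merge takes exactly k nodes
    # (standard k-ary Huffman requirement: (n - 1) % (k - 1) == 0).
    while (len(nodes) - 1) % (k - 1) != 0:
        nodes.append((0, next(ids), {}))
    heapq.heapify(nodes)
    while len(nodes) > 1:
        total, merged = 0, {}
        for sym in symbols:  # merge the k least frequent nodes
            freq, _, codes = heapq.heappop(nodes)
            total += freq
            for word, code in codes.items():
                merged[word] = [sym] + code  # branch label goes in front
        heapq.heappush(nodes, (total, next(ids), merged))
    return nodes[0][2]

# Toy usage: frequent words end up with shorter symbol sequences.
freqs = Counter({"the": 1000, "on": 300, "cat": 50, "sat": 40, "mat": 10})
codes = huffman_codes(freqs, symbols=["A", "B", "C", "D"])
for word in sorted(freqs, key=freqs.get, reverse=True):
    print(word, "".join(codes[word]))
```

On this toy input, "the" and "on" receive single-symbol codes while "sat" and "mat" need two symbols, which captures the frequency advantage of subwords while removing any compositional relationship between a word and its code.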
