Word-Level Representation From Bytes For Language Modeling

11/23/2022
by Chu-Tak Lee, et al.

Modern language models mostly take sub-words as input, a design that balances the trade-off between vocabulary size, number of parameters, and performance. However, sub-word tokenization still has disadvantages: it is not robust to noise and does not generalize well to new languages. Moreover, the current trend of scaling up models shows that larger models require larger embeddings, which makes parallelization harder. Previous work on image classification demonstrates that splitting raw input into a sequence of chunks is a strong, model-agnostic inductive bias. Based on this observation, we rethink the existing character-aware method that takes character-level inputs but performs word-level sequence modeling and prediction. We overhaul this method by introducing a cross-attention network that builds word-level representations directly from bytes, and a sub-word-level prediction head based on word-level hidden states that avoids the time and space cost of word-level prediction. With these two improvements combined, we obtain a token-free model with slim input embeddings for downstream tasks. We name our method Byte2Word and evaluate it on language modeling and text classification. Experiments show that Byte2Word is on par with the strong sub-word baseline BERT while using only 10% of the embedding size. We further test our method on synthetic noise and cross-lingual transfer and find it competitive with baseline methods in both settings.
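The central idea described above is to pool the raw bytes of each word into a single word-level vector with cross-attention. The following is a minimal sketch of that pooling step in PyTorch; the module name ByteToWordPooler, the single learned query per word, the dimensions, and the simple zero-padding are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of byte-to-word pooling via cross-attention (illustrative, not the
# authors' exact model): each word's UTF-8 bytes are embedded, and a learned
# query attends over them to produce one word-level vector.
import torch
import torch.nn as nn

class ByteToWordPooler(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_bytes=256):
        super().__init__()
        self.byte_emb = nn.Embedding(n_bytes, d_model)          # one vector per byte value
        self.query = nn.Parameter(torch.randn(1, 1, d_model))   # learned word-level query
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, byte_ids):
        # byte_ids: (num_words, max_bytes_per_word) byte values of each word
        kv = self.byte_emb(byte_ids)                     # (num_words, max_bytes, d_model)
        q = self.query.expand(byte_ids.size(0), -1, -1)  # one query per word
        word_vec, _ = self.attn(q, kv, kv)               # cross-attention over the bytes
        return word_vec.squeeze(1)                       # (num_words, d_model)

# Example: build word vectors for "hello world" from raw UTF-8 bytes.
words = ["hello", "world"]
max_len = max(len(w.encode("utf-8")) for w in words)
ids = torch.tensor(
    [list(w.encode("utf-8")) + [0] * (max_len - len(w.encode("utf-8"))) for w in words]
)
pooler = ByteToWordPooler()
print(pooler(ids).shape)  # torch.Size([2, 256])
```

Because the vocabulary of the embedding table is just the 256 possible byte values rather than tens of thousands of sub-words, the input embedding stays small, which is the source of the reported embedding-size savings.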

