Learning to Remove: Towards Isotropic Pre-trained BERT Embedding

by Yuxin Liang, et al.

Pre-trained language models such as BERT have become a common choice for natural language processing (NLP) tasks. Research on word representations shows that isotropic embeddings can significantly improve performance on downstream tasks. However, we measure and analyze the geometry of the pre-trained BERT embedding and find that it is far from isotropic. The word vectors are not centered around the origin, and the average cosine similarity between two random words is much higher than zero, which indicates that the word vectors are distributed in a narrow cone; this deteriorates the representation capacity of the word embedding. We propose a simple yet effective method to fix this problem: remove several dominant directions of the BERT embedding with a set of learnable weights. We train the weights on word similarity tasks and show that the processed embedding is more isotropic. We evaluate our method on three standard tasks: word similarity, word analogy, and semantic textual similarity. On all tasks, the word embedding processed by our method consistently outperforms the original embedding (with an average improvement of 13%) as well as baseline post-processing methods. Our method is also more robust to hyperparameter changes.
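The removal step described in the abstract (centering the embedding and subtracting weighted projections onto its dominant principal directions) can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the function name, the choice of `k`, and the use of fixed weights in place of weights trained on word similarity tasks are all assumptions of the sketch.

```python
import numpy as np

def remove_dominant_directions(embeddings, k=3, weights=None):
    """Center word embeddings and subtract (weighted) projections onto
    their top-k principal components.

    `weights` stands in for the paper's learnable per-direction weights;
    here it is a plain array (1.0 = remove a direction fully), since this
    sketch omits the training loop.
    """
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean          # remove the non-zero mean
    # Top-k principal directions of the centered embedding matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                   # shape (k, dim)
    if weights is None:
        weights = np.ones(k)
    projections = centered @ components.T           # (n_words, k)
    # Subtract each dominant direction, scaled by its weight.
    processed = centered - (projections * weights) @ components
    return processed
```

With all weights set to 1 this reduces to fully removing the mean and the top-k components; learned weights in (0, 1) would instead remove each direction only partially.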

