Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

by Na Li, et al.

Learning vectors that capture the meaning of concepts remains a fundamental challenge. Perhaps surprisingly, pre-trained language models have thus far enabled only modest improvements to the quality of such concept embeddings. Current strategies for using language models typically represent a concept by averaging the contextualised representations of its mentions in some corpus. This is potentially sub-optimal for at least two reasons. First, contextualised word vectors have an unusual geometry, which hampers downstream tasks. Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors. To address these issues, we propose two contrastive learning strategies, based on the view that whenever two sentences reveal similar properties, the corresponding contextualised vectors should also be similar. One strategy is fully unsupervised, estimating the properties which are expressed in a sentence from the neighbourhood structure of the contextualised word embeddings. The second strategy instead relies on a distant supervision signal from ConceptNet. Our experimental results show that the resulting vectors substantially outperform existing concept embeddings in predicting the semantic properties of concepts, with the ConceptNet-based strategy achieving the best results. These findings are further confirmed in a clustering task and in the downstream task of ontology completion.
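The contrastive view described above (sentences that express similar properties should yield similar contextualised vectors) is typically trained with an InfoNCE-style objective: an anchor mention is pulled towards a positive mention and pushed away from negatives. The sketch below is a minimal, hedged illustration of such a loss on fixed mention vectors; the function name, the cosine scoring, and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss for a single anchor mention vector.

    anchor, positive : (d,) contextualised mention vectors that should be similar
                       (e.g. two sentences expressing the same property).
    negatives        : (n, d) matrix of mention vectors to push away from.
    Returns the negative log-probability of picking the positive among all candidates.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity of the anchor to the positive (slot 0) and each negative.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, neg) for neg in negatives]) / temperature

    # Softmax cross-entropy with the positive in slot 0 (max-subtraction for stability).
    logits = logits - logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

When the anchor is close to its positive and far from the negatives, the loss is near zero; swapping the positive for a dissimilar vector raises it sharply, which is the training signal that aligns property-sharing mentions.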




