SimCLAD: A Simple Framework for Contrastive Learning of Acronym Disambiguation

11/29/2021
by Bin Li, et al.

Acronym disambiguation, the task of selecting the correct expansion of an ambiguous acronym in a given sentence from a dictionary, is one of the key challenges in scientific document understanding (SDU@AAAI-22). Recently, many attempts have addressed this problem by fine-tuning pre-trained masked language models (MLMs) to obtain better acronym representations. However, an acronym's meaning varies with context, and the corresponding phrase representations are mapped in different directions and lack discrimination across the vector space. Thus, the original representations of pre-trained MLMs are not ideal for the acronym disambiguation task. In this paper, we propose SimCLAD, a Simple framework for Contrastive Learning of Acronym Disambiguation, to better capture acronym meanings. Specifically, we design a continual contrastive pre-training method that enhances the pre-trained model's generalization ability by learning phrase-level contrastive distributions between the true meaning and ambiguous phrases. Results on English scientific-domain acronym disambiguation show that the proposed method outperforms other competitive state-of-the-art (SOTA) methods.
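The abstract does not include an implementation, but the phrase-level contrastive idea it describes can be sketched as follows: pull an acronym's contextual representation toward its true long-form phrase and push it away from the other candidate expansions. The snippet below is a minimal, hypothetical illustration using an InfoNCE-style loss; all names, tensor shapes, and the temperature value are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def phrase_contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style objective for acronym disambiguation (illustrative only).

    anchor:    (d,)   encoding of the acronym in its sentence context
    positive:  (d,)   encoding of the correct expansion phrase
    negatives: (k, d) encodings of the remaining candidate expansions
    """
    anchor = F.normalize(anchor, dim=-1)
    # Stack the true expansion (index 0) with the ambiguous candidates.
    candidates = torch.cat([positive.unsqueeze(0), negatives], dim=0)
    candidates = F.normalize(candidates, dim=-1)
    # Cosine similarities scaled by temperature, shape (k + 1,).
    logits = candidates @ anchor / temperature
    # Cross-entropy against index 0 rewards ranking the true meaning first.
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)

# Toy usage with random vectors standing in for MLM phrase encodings.
d, k = 768, 4
loss = phrase_contrastive_loss(torch.randn(d), torch.randn(d), torch.randn(k, d))
print(loss.item())
```

In a real setup, the anchor and candidate vectors would come from the pre-trained MLM's phrase encodings rather than random tensors, and the loss would drive the continual contrastive pre-training stage described above.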

