SIFTER: A Task-specific Alignment Strategy for Enhancing Sentence Embeddings
The paradigm of pre-training followed by fine-tuning on downstream tasks has become the mainstream approach in natural language processing. Although pre-trained models generalize well, their performance can still vary significantly across domain tasks because the data distribution differs between domains. For example, different parts of the sentence 'He married Smt. Dipali Ghosh in 1947 and led a very happy married life' matter to different downstream tasks: for similarity calculation, words such as 'led' and 'life' are more important, whereas for sentiment analysis the word 'happy' is crucial. This indicates that downstream tasks differ in their sensitivity to sentence components. Our approach, called SIFTER, scales the information in the model and data according to the specifics of the downstream task, enhancing the parts relevant to that task and suppressing irrelevant elements. In the experiments, we use SIFTER to improve SimCSE by constructing positive sample pairs that enhance the sentence stem and reduce the unimportant components of the sentence, and by maximizing the similarity among the three resulting sentences. Similarly, SIFTER improves the gating mechanism of an LSTM by short-circuiting the input gate for important words so that the model remembers the important parts of the sentence. Our experiments demonstrate that SIFTER outperforms the SimCSE and LSTM baselines.
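To make the SimCSE-based objective concrete, the sketch below illustrates a three-view contrastive loss of the kind the abstract describes: an anchor sentence, a stem-enhanced positive, and a component-reduced positive are pulled together against in-batch negatives. This is a minimal illustration under our own assumptions, not the authors' released code; the encoder, the view-construction step, and all function names are placeholders.

```python
# Hypothetical sketch of a three-view, SimCSE-style contrastive objective.
# The actual SIFTER implementation may differ.
import torch
import torch.nn.functional as F


def three_view_contrastive_loss(z_anchor, z_enhanced, z_reduced, temperature=0.05):
    """InfoNCE-style loss pulling three views of each sentence together.

    z_*: (batch, dim) embeddings of the anchor sentence, the stem-enhanced
         positive, and the component-reduced positive.
    """
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_enhanced = F.normalize(z_enhanced, dim=-1)
    z_reduced = F.normalize(z_reduced, dim=-1)

    labels = torch.arange(z_anchor.size(0), device=z_anchor.device)
    loss = 0.0
    # Treat each positive view as the target for the anchor (and vice versa);
    # the other sentences in the batch act as negatives.
    for pos in (z_enhanced, z_reduced):
        logits = z_anchor @ pos.t() / temperature  # (batch, batch) similarity
        loss = loss + F.cross_entropy(logits, labels)
        loss = loss + F.cross_entropy(logits.t(), labels)
    return loss / 4.0


if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder outputs.
    batch, dim = 8, 768
    z_a, z_e, z_r = (torch.randn(batch, dim) for _ in range(3))
    print(three_view_contrastive_loss(z_a, z_e, z_r))
```

The analogous LSTM variant mentioned above would instead bias or bypass the input gate for tokens judged important to the task, but its exact mechanism is not specified in this abstract.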