Semantic-aware Representation Learning Via Probability Contrastive Loss

11/11/2021
by   Junjie Li, et al.

Recent feature contrastive learning (FCL) has shown promising performance in unsupervised representation learning. However, for close-set representation learning, where labeled and unlabeled data share the same semantic space, FCL yields limited gains because it does not involve class semantics during optimization. Consequently, although the produced features are information-rich, they are not guaranteed to be easily classified by the class weights learned from labeled data. To tackle this issue, we propose a novel probability contrastive learning (PCL) framework, which not only produces rich features but also enforces them to be distributed around the class prototypes. Specifically, we propose to perform contrastive learning on the output probabilities after softmax instead of on the extracted features as in FCL, which explicitly exploits class semantics during optimization. Moreover, we remove the ℓ_2 normalization used in traditional FCL and directly use the ℓ_1-normalized probabilities for contrastive learning. Our proposed PCL is simple and effective. We conduct extensive experiments on three close-set image classification tasks, i.e., unsupervised domain adaptation, semi-supervised learning, and semi-supervised domain adaptation. The results on multiple datasets demonstrate that PCL consistently yields considerable gains and achieves state-of-the-art performance on all three tasks.
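The key modification can be illustrated with a short sketch: instead of contrasting ℓ_2-normalized feature vectors, the loss below contrasts the softmax probabilities of two augmented views, which are already ℓ_1-normalized. This is a minimal PyTorch sketch assuming an instance-discrimination setup with in-batch negatives; the function name, the stop-gradient on the key view, and the temperature value are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F


def pcl_loss(logits_q: torch.Tensor, logits_k: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Sketch of a probability contrastive loss (PCL).

    logits_q, logits_k: (N, C) classifier logits for two augmented views
    of the same N images over C classes (hypothetical names).
    """
    # Softmax outputs are l1-normalized by construction; no l2 normalization is applied.
    p_q = F.softmax(logits_q, dim=1)
    p_k = F.softmax(logits_k, dim=1).detach()  # assumed stop-gradient on the key view

    # Pairwise similarities between query and key probability vectors.
    sim = torch.matmul(p_q, p_k.t()) / temperature  # (N, N)

    # The matching view of each image is the positive; all other images in the batch are negatives.
    targets = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: 8 unlabeled images, 10 classes, two augmented views each.
    logits_q = torch.randn(8, 10)
    logits_k = torch.randn(8, 10)
    print(pcl_loss(logits_q, logits_k))
```

Because the contrasted vectors are class-probability distributions, pulling two views of the same image together also pushes their predictions toward the same class prototype, which is the semantic-aware effect the abstract describes.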
