Contrastive Principal Component Learning: Modeling Similarity by Augmentation Overlap

06/01/2022
by Lu Han, et al.

Traditional self-supervised contrastive learning methods learn embeddings by pulling views of the same sample together and pushing views of different samples apart. Since the views of a sample are usually generated via data augmentations, the semantic relationships between samples are ignored. Based on the observation that semantically similar samples are more likely to have similar augmentations, we propose to measure similarity via the distribution of augmentations, i.e., how much the augmentations of two samples overlap. To handle the high dimensionality and computational cost of this distribution, we propose a novel Contrastive Principal Component Learning (CPCL) method composed of a contrastive-like loss and an on-the-fly projection loss, which together efficiently perform PCA on the augmentation feature that encodes the augmentation distribution. Under CPCL, the learned low-dimensional embeddings theoretically preserve the similarity of augmentation distributions between samples. Empirical results show that our method achieves results competitive with various traditional contrastive learning methods on different benchmarks.
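The abstract does not give the exact form of the two losses, so the snippet below is only a minimal PyTorch sketch of the idea, not the paper's method: the first term is a contrastive-like alignment of two augmented views of the same batch, and the second is a projection-style penalty that decorrelates embeddings across samples, the kind of quadratic term known (e.g., for spectral contrastive losses) to drive the solution toward principal directions of the underlying augmentation feature. The name `cpcl_like_loss` and the `proj_weight` parameter are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's exact objective): a contrastive-like
# alignment term plus a quadratic cross-sample penalty that acts as an
# on-the-fly decorrelation, pushing the embedding toward a PCA-like solution.
import torch

def cpcl_like_loss(z1: torch.Tensor, z2: torch.Tensor, proj_weight: float = 1.0) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmentations of the same samples."""
    n = z1.shape[0]
    # Contrastive-like term: maximize agreement between views of the same sample.
    align = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Projection-style term: drive inner products between different samples'
    # embeddings toward zero, decorrelating the representation.
    gram = z1 @ z2.T                                  # (n, n) pairwise inner products
    off_diag = gram - torch.diag(torch.diag(gram))    # zero out positive pairs
    proj = (off_diag ** 2).sum() / (n * (n - 1))
    return align + proj_weight * proj

# Toy usage with a random "encoder"; in practice z1 and z2 would come from
# a shared network applied to two augmented views of the same images.
if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = torch.nn.Linear(32, 8)
    x = torch.randn(16, 32)
    z1 = encoder(x + 0.1 * torch.randn_like(x))  # view 1 (toy augmentation)
    z2 = encoder(x + 0.1 * torch.randn_like(x))  # view 2
    print(cpcl_like_loss(z1, z2).item())
```

Minimizing a loss of this shape amounts to a low-rank factorization of a similarity matrix over samples, which is why a contrastive-like objective can stand in for PCA on a feature that is never materialized; in the paper, the on-the-fly projection loss presumably plays the decorrelation role sketched here.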

Related research

03/25/2022 · Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap
Recently, contrastive learning has risen to be a promising approach for ...

05/15/2023 · Learning Better Contrastive View from Radiologist's Gaze
Recent self-supervised contrastive learning methods greatly benefit from...

01/12/2023 · Self-Supervised Image-to-Point Distillation via Semantically Tolerant Contrastive Loss
An effective framework for learning 3D representations for perception ta...

01/29/2022 · Deep Contrastive Learning is Provably (almost) Principal Component Analysis
We show that Contrastive Learning (CL) under a family of loss functions ...

09/21/2021 · AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators
Contrastive learning has been widely applied to graph representation lea...

09/28/2022 · Graph Soft-Contrastive Learning via Neighborhood Ranking
Graph contrastive learning (GCL) has been an emerging solution for graph...

08/31/2021 · ScatSimCLR: self-supervised contrastive learning with pretext task regularization for small-scale datasets
In this paper, we consider a problem of self-supervised learning for sma...
