Learn by Challenging Yourself: Contrastive Visual Representation Learning with Hard Sample Generation

by Yawen Wu, et al.

Contrastive learning (CL), a self-supervised learning approach, can effectively learn visual representations from unlabeled data. However, CL requires training on vast quantities of diverse data to achieve good performance; without this, its performance degrades greatly. To tackle this problem, we propose a framework with two approaches that improve the data efficiency of CL training by generating beneficial samples and by joint learning. The first approach generates hard samples for the main model. The generator is jointly learned with the main model to dynamically customize hard samples based on the training state of the main model. As the main model's knowledge progressively grows, the generated samples become harder, constantly encouraging the main model to learn better representations. Second, a pair of data generators is proposed to generate similar but distinct samples as positive pairs. In joint learning, the hardness of a positive pair is progressively increased by decreasing their similarity. In this way, the main model learns to cluster hard positives by pulling the representations of similar yet distinct samples together, so that the representations of similar samples are well-clustered and better representations can be learned. Comprehensive experiments show superior accuracy and data efficiency of the proposed methods over the state-of-the-art on multiple datasets. For example, accuracy improvements of about 5% (ImageNet-100 and CIFAR-10) and more than 6% are achieved for linear classification. Besides, up to 2x data efficiency for linear classification and up to 5x data efficiency for transfer learning are achieved.
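The core mechanism the abstract describes, harder positives (lower anchor-positive similarity) producing a stronger training signal, can be illustrated with a standard InfoNCE-style contrastive loss. The sketch below is not the paper's implementation; the function name, vectors, and temperature value are illustrative assumptions, and it uses plain numpy for a single anchor rather than a batched encoder.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: cross-entropy of picking the
    positive among (positive + negatives) by cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity logits: positive first, then all negatives.
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    logits = np.array(sims) / temperature
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # loss is low when the positive wins

# Illustrative vectors: a "hard" positive is less similar to the anchor
# than an "easy" one, so it yields a larger loss (stronger gradient).
anchor = np.array([1.0, 0.0])
easy_pos = np.array([1.0, 0.1])
hard_pos = np.array([0.2, 1.0])
negatives = [np.array([0.0, 1.0]), np.array([-1.0, 0.5])]

loss_easy = info_nce_loss(anchor, easy_pos, negatives)
loss_hard = info_nce_loss(anchor, hard_pos, negatives)
```

In this toy setting `loss_hard > loss_easy`, which is the intuition behind progressively decreasing positive-pair similarity during joint training: as the model improves, easy pairs contribute little loss, so the generators are pushed to produce pairs the model has not yet learned to cluster.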


