Fast-MoCo: Boost Momentum-based Contrastive Learning with Combinatorial Patches

07/17/2022
by   Yuanzheng Ci, et al.

Contrastive-based self-supervised learning methods have achieved great success in recent years. However, self-supervision requires extremely long training schedules (e.g., 800 epochs for MoCo v3) to achieve promising results, which is unacceptable for the general academic community and hinders the development of this topic. This work revisits momentum-based contrastive learning frameworks and identifies an inefficiency: two augmented views generate only one positive pair. We propose Fast-MoCo, a novel framework that utilizes combinatorial patches to construct multiple positive pairs from two augmented views, providing abundant supervision signals that bring significant acceleration at negligible extra computational cost. Fast-MoCo trained with 100 epochs achieves 73.5% linear evaluation accuracy, similar to MoCo v3 (ResNet-50 backbone) trained with 800 epochs. Extra training (200 epochs) further improves the result to 75.1%, which is on par with state-of-the-art methods. Experiments on several downstream tasks also confirm the effectiveness of Fast-MoCo.
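The combinatorial-patch idea can be illustrated with a minimal sketch. Assuming one augmented view is split into patches that are encoded separately, averaging every k-subset of patch embeddings yields multiple "combined" embeddings, each of which forms a positive pair with the momentum-encoded embedding of the other view. The function names and the averaging choice below are illustrative assumptions, not the authors' implementation:

```python
from itertools import combinations

def combinatorial_positives(patch_embs, momentum_emb, combine_size=2):
    """Illustrative sketch (not the authors' code): form one combined
    embedding per `combine_size`-subset of patch embeddings, and pair
    each with the momentum-branch embedding of the other view.

    With n patches this yields C(n, combine_size) positive pairs
    instead of the single pair used in standard momentum contrast.
    """
    pairs = []
    for subset in combinations(patch_embs, combine_size):
        # Combine the subset by element-wise averaging (one simple choice).
        combined = [sum(vals) / combine_size for vals in zip(*subset)]
        pairs.append((combined, momentum_emb))
    return pairs

# Example: 4 patch embeddings and combine_size=2 give C(4,2) = 6 positive pairs.
patch_embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]]
pairs = combinatorial_positives(patch_embs, momentum_emb=[0.5, 0.5])
print(len(pairs))  # → 6
```

Each of these pairs contributes a contrastive loss term, which is the source of the denser supervision signal the abstract describes.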
