Revisiting the Critical Factors of Augmentation-Invariant Representation Learning

07/30/2022
by Junqiang Huang, et al.

We focus on better understanding the critical factors of augmentation-invariant representation learning. We revisit MoCo v2 and BYOL to test the following assumption: different frameworks produce representations with different characteristics even under the same pretext task. We establish the first benchmark for fair comparisons between MoCo v2 and BYOL, and observe that (i) sophisticated model configurations enable better adaptation to the pre-training dataset, and (ii) mismatched optimization strategies between pre-training and fine-tuning prevent models from achieving competitive transfer performance. Given this fair benchmark, we investigate further and find that the asymmetry of the network structure enables contrastive frameworks to work well under the linear evaluation protocol, but may hurt transfer performance on long-tailed classification tasks. Moreover, neither negative samples nor the asymmetric network structure makes models more sensitive to the choice of data augmentations. We believe our findings provide useful information for future work.
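The "asymmetry of network structure" mentioned above generally refers to BYOL's extra predictor head on the online branch combined with a stop-gradient on the target branch, whereas MoCo v2 keeps the two branches architecturally matched and instead contrasts each query against a queue of negative keys. The sketch below is our own minimal PyTorch illustration of the two objectives, not the authors' code; the helper names (MLP, byol_loss, info_nce_loss) and all dimensions are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Two-layer head used as projector (both methods) or predictor (BYOL only)."""
    def __init__(self, in_dim, hidden_dim=4096, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def byol_loss(online_pred, target_proj):
    """BYOL-style objective: asymmetric by construction -- only the online
    branch carries a predictor, and gradients are stopped on the target."""
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)  # stop-gradient target
    return (2.0 - 2.0 * (p * z).sum(dim=-1)).mean()

def info_nce_loss(query, pos_key, neg_queue, temperature=0.2):
    """MoCo v2-style InfoNCE: both branches share one architecture (no
    predictor); the loss contrasts each query with one positive and K negatives."""
    q = F.normalize(query, dim=-1)
    k = F.normalize(pos_key.detach(), dim=-1)
    l_pos = (q * k).sum(dim=-1, keepdim=True)                 # (N, 1)
    l_neg = q @ F.normalize(neg_queue, dim=-1).t()            # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)         # positive at index 0
    return F.cross_entropy(logits, labels)

# Smoke test on random 512-d backbone features (batch of 8, 4096 negatives).
feats = torch.randn(8, 512)
projector, predictor = MLP(512), MLP(256)
z = projector(feats)
print(byol_loss(predictor(z), z))                   # asymmetric branch
print(info_nce_loss(z, z, torch.randn(4096, 256)))  # symmetric branches + negatives
```

The predictor-plus-stop-gradient pairing is what lets BYOL avoid representational collapse without any negatives; this is the structural distinction the benchmark above isolates when comparing the two frameworks.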

