Self-Supervised Learning via Maximum Entropy Coding

10/20/2022
by Xin Liu, et al.

A mainstream class of current self-supervised learning methods pursues a general-purpose representation that can be transferred well to downstream tasks, typically by optimizing a given pretext task such as instance discrimination. In this work, we argue that existing pretext tasks inevitably introduce biases into the learned representation, which in turn lead to biased transfer performance on various downstream tasks. To cope with this issue, we propose Maximum Entropy Coding (MEC), a more principled objective that explicitly optimizes the structure of the representation, so that the learned representation is less biased and thus generalizes better to unseen downstream tasks. Inspired by the principle of maximum entropy in information theory, we hypothesize that a generalizable representation should be the one that attains the maximum entropy among all plausible representations. To make the objective end-to-end trainable, we propose to leverage the minimal coding length in lossy data coding as a computationally tractable surrogate for the entropy, and further derive a scalable reformulation of the objective that allows fast computation. Extensive experiments demonstrate that MEC learns a more generalizable representation than previous methods based on specific pretext tasks. It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probing, but also semi-supervised classification, object detection, instance segmentation, and object tracking. Interestingly, we show that existing batch-wise and feature-wise self-supervised objectives can be seen as equivalent to low-order approximations of MEC. Code and pre-trained models are available at https://github.com/xinliu20/MEC.
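The abstract describes the objective only at a high level. As a concrete illustration, below is a minimal PyTorch sketch of the general idea: a coding-length objective of the form log det(I + lam * Z1 Z2^T) over two augmented views of a batch, approximated with a truncated Taylor series of the matrix logarithm so that only matrix products and traces are needed. The scaling lam = d / (m * eps^2), the distortion value eps, and the truncation order are illustrative assumptions rather than the authors' exact formulation; see the linked repository for the official implementation.

```python
import torch
import torch.nn.functional as F

def mec_loss(z1: torch.Tensor, z2: torch.Tensor,
             eps: float = 0.06, order: int = 4) -> torch.Tensor:
    """Sketch of a maximum-entropy-coding objective (illustrative, not the
    authors' exact code).

    z1, z2: (m, d) representations of two augmented views of the same batch.
    eps:    assumed distortion tolerance of the lossy coder (hypothetical value).
    order:  truncation order of the Taylor series of log det(I + lam * C).
    """
    m, d = z1.shape
    z1 = F.normalize(z1, dim=1)          # work with unit-norm features
    z2 = F.normalize(z2, dim=1)
    lam = d / (m * eps ** 2)             # scaling from the coding-length formula
    c = lam * (z1 @ z2.T)                # (m, m) cross-view similarity matrix
    power = c
    trace = torch.trace(power)           # k = 1 term of Tr(log(I + lam * C))
    for k in range(2, order + 1):
        power = power @ c                # (lam * C)^k
        trace = trace + ((-1) ** (k + 1) / k) * torch.trace(power)
    return -trace                        # minimize the negative coding length
```

Note that truncating the series at k = 1 leaves only Tr(lam * Z1 Z2^T) = lam * sum_i z1_i . z2_i, a plain batch-wise cosine-similarity objective, which illustrates the abstract's claim that existing batch-wise objectives correspond to low-order approximations of MEC.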


Related research

11/23/2022 · Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment
Recent advances have indicated the strengths of self-supervised pre-trai...

12/07/2021 · Unsupervised Representation Learning via Neural Activation Coding
We present neural activation coding (NAC) as a novel approach for learni...

03/31/2023 · INoD: Injected Noise Discriminator for Self-Supervised Representation Learning in Agricultural Fields
Perception datasets for agriculture are limited both in quantity and div...

05/31/2023 · Representation Reliability and Its Impact on Downstream Tasks
Self-supervised pre-trained models extract general-purpose representatio...

08/19/2021 · Concurrent Discrimination and Alignment for Self-Supervised Feature Learning
Existing self-supervised learning methods learn representation by means ...

11/18/2021 · Improving Transferability of Representations via Augmentation-Aware Self-Supervision
Recent unsupervised representation learning methods have been shown to be eff...

03/27/2019 · Self-Supervised Learning via Conditional Motion Propagation
Intelligent agents naturally learn from motion. Various self-supervised ...
