Self-Supervised Learning of Pretext-Invariant Representations

12/04/2019
by Ishan Misra, et al.

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images. Many pretext tasks lead to representations that are covariant with image transformations. We argue that, instead, semantic representations ought to be invariant under such transformations. Specifically, we develop Pretext-Invariant Representation Learning (PIRL, pronounced as "pearl") that learns invariant representations based on pretext tasks. We use PIRL with a commonly used pretext task that involves solving jigsaw puzzles. We find that PIRL substantially improves the semantic quality of the learned image representations. Our approach sets a new state-of-the-art in self-supervised learning from images on several popular benchmarks for self-supervised learning. Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection. Altogether, our results demonstrate the potential of self-supervised learning of image representations with good invariance properties.
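
The abstract describes PIRL only at a high level: representations of an image and of its jigsaw-transformed copy are pulled together, while representations of other images are pushed apart. As an illustration only, below is a minimal PyTorch sketch of such a pretext-invariant contrastive objective; the function name pirl_nce_loss, the temperature value, the tensor shapes, and the random tensors standing in for a memory bank are assumptions for this sketch, not the authors' implementation.

```python
# Illustrative sketch of a PIRL-style noise-contrastive objective (not the authors' code).
# Assumption: f(I) and g(I^t) are embeddings of an image and its jigsaw-transformed copy;
# negatives are embeddings of other images, e.g. drawn from a memory bank.
import torch
import torch.nn.functional as F

def pirl_nce_loss(v_image, v_transformed, memory_negatives, temperature=0.07):
    """Pulls an image embedding toward the embedding of its transformed copy
    and pushes it away from embeddings of unrelated images.

    v_image:          (batch, dim)   embeddings f(I)
    v_transformed:    (batch, dim)   embeddings g(I^t) of the jigsaw-permuted images
    memory_negatives: (num_neg, dim) embeddings of other images
    """
    v_image = F.normalize(v_image, dim=1)
    v_transformed = F.normalize(v_transformed, dim=1)
    memory_negatives = F.normalize(memory_negatives, dim=1)

    # Positive logit: cosine similarity between an image and its own transformed copy.
    pos = torch.sum(v_image * v_transformed, dim=1, keepdim=True)   # (batch, 1)
    # Negative logits: similarities to embeddings of unrelated images.
    neg = v_image @ memory_negatives.t()                             # (batch, num_neg)

    # Cross-entropy over [positive, negatives] with the positive at index 0;
    # the temperature of 0.07 is an illustrative choice.
    logits = torch.cat([pos, neg], dim=1) / temperature
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)

# Usage with random tensors standing in for network outputs:
loss = pirl_nce_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(4096, 128))
```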

Related research

03/08/2023  Self-Supervised Learning for Group Equivariant Neural Networks
    This paper proposes a method to construct pretext tasks for self-supervi...

01/23/2023  Self-Supervised Image Representation Learning: Transcending Masking with Paired Image Overlay
    Self-supervised learning has become a popular approach in recent years f...

01/31/2022  Adversarial Masking for Self-Supervised Learning
    We propose ADIOS, a masked image model (MIM) framework for self-supervis...

12/25/2019  Multiple Pretext-Task for Self-Supervised Learning via Mixing Multiple Image Transformations
    Self-supervised learning is one of the most promising approaches to lear...

01/16/2021  Self-Supervised Representation Learning from Flow Equivariance
    Self-supervised representation learning is able to learn semantically me...

02/01/2023  Image-Based Vehicle Classification by Synergizing Features from Supervised and Self-Supervised Learning Paradigms
    This paper introduces a novel approach to leverage features learned from...

05/03/2019  Scaling and Benchmarking Self-Supervised Visual Representation Learning
    Self-supervised learning aims to learn representations from the data its...
