MAST: Masked Augmentation Subspace Training for Generalizable Self-Supervised Priors

03/07/2023
by Chen Huang, et al.

Recent Self-Supervised Learning (SSL) methods are able to learn feature representations that are invariant to different data augmentations, which can then be transferred to downstream tasks of interest. However, different downstream tasks require different invariances for their best performance, so the optimal choice of augmentations for SSL depends on the target task. In this paper, we aim to learn self-supervised features that generalize well across a variety of downstream tasks (e.g., object classification, detection and instance segmentation) without knowing any task information beforehand. We do so via Masked Augmentation Subspace Training (MAST), which encodes the priors from different data augmentations into a single feature space in a factorized way. Specifically, we disentangle the feature space into separate subspaces, each induced by a learnable mask that selects the feature dimensions relevant for modeling invariance to a specific augmentation. We show the success of MAST in jointly capturing generalizable priors from different augmentations, using both unique and shared features across the subspaces. We further show that MAST benefits from uncertainty modeling to down-weight ambiguous samples from strong augmentations that may cause similarity mismatch in each subspace. Experiments demonstrate that MAST consistently improves generalization on various downstream tasks, while being task-agnostic and efficient during SSL. We also provide insights into how different augmentations are related and how uncertainty reflects learning difficulty.
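To make the idea concrete, below is a minimal, illustrative sketch of per-augmentation masked subspaces with uncertainty-weighted similarity, assuming a generic SimSiam/SimCLR-style backbone producing embeddings for two views. The names `SubspaceMasks` and `masked_similarity_loss`, and the heteroscedastic-style weighting via a predicted `log_var`, are hypothetical stand-ins for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceMasks(nn.Module):
    """One learnable soft mask per augmentation; each mask selects the feature
    dimensions that model invariance to that augmentation (assumed design)."""
    def __init__(self, num_augmentations: int, feature_dim: int):
        super().__init__()
        # Unconstrained logits; sigmoid keeps mask values in (0, 1).
        self.logits = nn.Parameter(torch.zeros(num_augmentations, feature_dim))

    def forward(self, aug_idx: int) -> torch.Tensor:
        return torch.sigmoid(self.logits[aug_idx])

def masked_similarity_loss(z1, z2, mask, log_var):
    """Negative cosine similarity inside one masked subspace, reweighted by a
    per-sample uncertainty term (an assumed heteroscedastic-style weighting)."""
    z1m = F.normalize(z1 * mask, dim=-1)
    z2m = F.normalize(z2 * mask, dim=-1)
    per_sample = -(z1m * z2m).sum(dim=-1)          # [batch]
    precision = torch.exp(-log_var)                 # ambiguous pairs get low weight
    return (precision * per_sample + log_var).mean()

# Usage: z1, z2 are embeddings of two views of the same images; aug_idx
# identifies the augmentation whose subspace is being trained.
feature_dim, num_augs, batch = 256, 4, 32
masks = SubspaceMasks(num_augs, feature_dim)
z1, z2 = torch.randn(batch, feature_dim), torch.randn(batch, feature_dim)
log_var = torch.zeros(batch, requires_grad=True)    # predicted per-sample uncertainty
loss = masked_similarity_loss(z1, z2, masks(aug_idx=2), log_var)
loss.backward()
```

In this sketch the full loss would sum such terms over all augmentation subspaces, letting the masks share or separate dimensions across augmentations; how the per-sample uncertainty is predicted is left abstract here.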
