Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE

10/24/2022
by Alireza Nasiri et al.

In many imaging modalities, objects of interest can occur in a variety of locations and poses (i.e. are subject to translations and rotations in 2d or 3d), but the location and pose of an object does not change its semantics (i.e. the object's essence). That is, the specific location and rotation of an airplane in satellite imagery, or the 3d rotation of a chair in a natural image, or the rotation of a particle in a cryo-electron micrograph, do not change the intrinsic nature of those objects. Here, we consider the problem of learning semantic representations of objects that are invariant to pose and location in a fully unsupervised manner. We address shortcomings in previous approaches to this problem by introducing TARGET-VAE, a translation and rotation group-equivariant variational autoencoder framework. TARGET-VAE combines three core innovations: 1) a rotation and translation group-equivariant encoder architecture, 2) a structurally disentangled distribution over latent rotation, translation, and a rotation-translation-invariant semantic object representation, which are jointly inferred by the approximate inference network, and 3) a spatially equivariant generator network. In comprehensive experiments, we show that TARGET-VAE learns disentangled representations without supervision that significantly improve upon, and avoid the pathologies of, previous methods. When trained on images highly corrupted by rotation and translation, the semantic representations learned by TARGET-VAE are similar to those learned on consistently posed objects, dramatically improving clustering in the semantic latent space. Furthermore, TARGET-VAE is able to perform remarkably accurate unsupervised pose and location inference. We expect methods like TARGET-VAE will underpin future approaches for unsupervised object generation, pose prediction, and object detection.
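The abstract's core idea, a latent space factorized into rotation, translation, and a semantic code, fed to a spatially equivariant coordinate-based generator (as in spatial-VAE), can be illustrated with a minimal sketch. All names below, and the tiny random-weight MLP, are hypothetical illustrations, not the paper's actual implementation: the decoder evaluates a function of pixel coordinates that have been rotated by theta and shifted by t, conditioned on the semantic code z, so pose acts only on the coordinate frame and never on z.

```python
import numpy as np

def coord_grid(n):
    # pixel coordinates on a regular grid in [-1, 1]^2, shape (n*n, 2)
    xs = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(xs, xs)
    return np.stack([X.ravel(), Y.ravel()], axis=1)

def generate(z, theta, t, W1, b1, W2, b2, n=28):
    # apply the inverse pose transform to the coordinate grid:
    # translate by -t, then rotate by -theta (via right-multiplying R)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    coords = (coord_grid(n) - t) @ R
    # condition every coordinate on the pose-invariant semantic code z
    h = np.concatenate([coords, np.tile(z, (coords.shape[0], 1))], axis=1)
    h = np.tanh(h @ W1 + b1)          # one hidden layer, tanh nonlinearity
    img = (h @ W2 + b2).reshape(n, n) # per-pixel intensity
    return img

rng = np.random.default_rng(0)
d, hidden = 4, 16                     # semantic-code and hidden sizes (arbitrary)
W1 = rng.normal(size=(2 + d, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1));     b2 = np.zeros(1)
img = generate(rng.normal(size=d), np.pi / 4, np.array([0.1, -0.2]),
               W1, b1, W2, b2)
# img has shape (28, 28); changing theta rotates the rendered object
# about the grid center while z, and hence the object's identity, is unchanged
```

Because pose enters only through the coordinate transform, the same z renders the same object at any rotation and translation, which is what makes z a pose-invariant representation in this style of generator.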


Related research

- 09/25/2019: Explicitly disentangling image content from translation and rotation with spatial-VAE
- 04/27/2023: Rotation and Translation Invariant Representation Learning with Implicit Neural Representations
- 12/24/2014: Transformation Properties of Learned Visual Representations
- 05/22/2019: PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking
- 03/21/2019: Learning Disentangled Representations of Satellite Image Time Series
- 04/09/2018: Binge Watching: Scaling Affordance Learning from Sitcoms
- 04/12/2018: CubeNet: Equivariance to 3D Rotation and Translation
