Toroidal AutoEncoder

by Maciej Mikulski, et al.

Enforcing distributions of latent variables in neural networks is an active subject. It is vital in all kinds of generative models, where we want to be able to interpolate between points in the latent space, or sample from it. Modern generative AutoEncoders (AE) like WAE, SWAE, and CWAE add a regularizer to the standard (deterministic) AE, which allows enforcing a Gaussian distribution in the latent space. Enforcing different distributions, especially topologically nontrivial ones, might bring interesting new possibilities, but this subject seems unexplored so far. This article proposes a new approach for enforcing a uniform distribution on the d-dimensional torus. We introduce a circular spring loss, which enforces the minibatch points to be equally spaced and to satisfy cyclic boundary conditions. As an example application we propose multiple-path morphing. In a uniform distribution on the latent space of angles, the minimal-distance geodesic between two points becomes a line; the torus topology, however, allows choosing such lines in alternative ways, passing through different edges of [-π,π]^d. Further applications to explore include learning real-life topologically nontrivial feature spaces, such as rotations: automatically recognizing the 2D rotation of an object in a picture by training on relative angles, or even 3D rotations by additionally using spherical features, so that morphing should be close to object rotation.
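The abstract does not spell out the circular spring loss beyond the description above. As a minimal NumPy sketch of one plausible reading, for a single angular latent dimension: sort the minibatch angles, measure the gaps between neighbors (closing the last gap through the cyclic ±π boundary), and penalize deviation from equal spacing. The function name and the squared-gap penalty are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def circular_spring_loss(theta):
    """Hypothetical sketch: penalize deviation of a minibatch of angles
    in [-pi, pi) from equal spacing on the circle. The gap between the
    largest and smallest angle is closed through the cyclic boundary,
    so all n gaps sum to 2*pi."""
    theta = np.sort(np.asarray(theta, dtype=float))
    n = theta.size
    gaps = np.diff(theta)                          # n-1 adjacent gaps
    wrap_gap = 2 * np.pi - (theta[-1] - theta[0])  # gap through the +/- pi edge
    all_gaps = np.append(gaps, wrap_gap)           # n gaps, summing to 2*pi
    return float(np.sum((all_gaps - 2 * np.pi / n) ** 2))

# Equally spaced angles incur (near-)zero loss; clustered angles do not.
even = np.linspace(-np.pi, np.pi, 8, endpoint=False)
print(circular_spring_loss(even))                       # ~0
print(circular_spring_loss([0.0, 0.1, 0.2, 0.3]) > 0)   # True
```

In a d-dimensional latent space this penalty would be applied per angular coordinate and added to the reconstruction loss, analogously to the regularizers in WAE/SWAE/CWAE.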

