Gromov-Wasserstein Autoencoders

09/15/2022
by Nao Nakagawa, et al.

Learning concise data representations without supervisory signals is a fundamental challenge in machine learning. A prominent approach to this goal is to use likelihood-based models, such as variational autoencoders (VAEs), to learn latent representations based on a meta-prior, a general premise assumed to be beneficial for downstream tasks (e.g., disentanglement). However, such approaches often deviate from the original likelihood architecture in order to incorporate the introduced meta-prior, which causes undesirable changes to their training. In this paper, we propose a novel representation learning method, Gromov-Wasserstein Autoencoders (GWAE), which directly matches the latent and data distributions. Instead of a likelihood-based objective, GWAE models have a trainable prior optimized by minimizing the Gromov-Wasserstein (GW) metric. The GW metric measures the distance-structure-oriented discrepancy between distributions supported on incomparable spaces, e.g., spaces with different dimensionalities. By restricting the family of the trainable prior, we can introduce meta-priors to control latent representations for downstream tasks. Empirical comparisons with existing VAE-based methods show that GWAE models can learn representations based on meta-priors by changing the prior family, without further modifying the GW objective.
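For reference, the Gromov-Wasserstein distance between two metric-measure spaces is commonly written as below; this is the standard textbook definition, not a formula quoted from the paper:

\mathrm{GW}_p\bigl((\mathcal{X}, d_{\mathcal{X}}, \mu), (\mathcal{Z}, d_{\mathcal{Z}}, \nu)\bigr)
  = \left( \inf_{\pi \in \Pi(\mu, \nu)} \iint \bigl| d_{\mathcal{X}}(x, x') - d_{\mathcal{Z}}(z, z') \bigr|^{p} \, \mathrm{d}\pi(x, z) \, \mathrm{d}\pi(x', z') \right)^{1/p}

As a rough illustration of comparing the latent and data distributions through their distance structures, the following sketch evaluates an empirical GW discrepancy between a data minibatch and samples from a low-dimensional prior using the POT library. The shapes, the uniform minibatch weights, and the use of gromov_wasserstein2 are illustrative assumptions, not the paper's estimator or training procedure.

import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 784))   # data minibatch (e.g., flattened images); illustrative shapes
z = rng.normal(size=(128, 16))    # samples from a low-dimensional (trainable) prior

C_x = ot.dist(x, x)               # pairwise squared-Euclidean distances in data space
C_z = ot.dist(z, z)               # pairwise squared-Euclidean distances in latent space
p = ot.unif(x.shape[0])           # uniform weights over the data minibatch
q = ot.unif(z.shape[0])           # uniform weights over the prior samples

# Scalar GW discrepancy between the two minibatches; gradient-based
# optimization of the prior, as in GWAE training, is omitted here.
gw_value = ot.gromov.gromov_wasserstein2(C_x, C_z, p, q, loss_fun='square_loss')
print(gw_value)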
