Bounding Evidence and Estimating Log-Likelihood in VAE

by Łukasz Struski et al.
Jagiellonian University

Many crucial problems in deep learning and statistics are caused by the variational gap, i.e., the difference between the evidence and the evidence lower bound (ELBO). As a consequence, the classical VAE model, which uses the ELBO as its cost function, yields only a lower bound on the log-likelihood, so log-likelihoods cannot be compared across models. In this paper, we present a general and effective upper bound on the variational gap, which allows us to efficiently estimate the true evidence. We provide an extensive theoretical study of the proposed approach. Moreover, we show that applying our estimation readily yields both lower and upper bounds on the log-likelihood of VAE models.
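The relation the abstract refers to is the standard ELBO decomposition of the evidence (a standard identity; the notation below is ours, not taken from the paper):

```latex
% Decomposition of the evidence into the ELBO and the variational gap
\log p_\theta(x)
  = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]}_{\mathrm{ELBO}(x)}
  + \underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x) \,\big\|\, p_\theta(z \mid x)\right)}_{\text{variational gap}\ \ge\ 0}
```

Because the KL term is nonnegative, maximizing the ELBO only lower-bounds \(\log p_\theta(x)\); an upper bound on the gap, as proposed here, sandwiches the true log-likelihood from both sides.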




