Variational Diffusion Auto-encoder: Deep Latent Variable Model with Unconditional Diffusion Prior

04/24/2023
by   Georgios Batzolis, et al.
Variational auto-encoders (VAEs) are one of the most popular approaches to deep generative modeling. Despite their success, images generated by VAEs are known to suffer from blurriness, due to a highly unrealistic modeling assumption: that the conditional data distribution p(x | z) can be approximated as an isotropic Gaussian. In this work we introduce a principled approach to modeling the conditional data distribution p(x | z) by incorporating a diffusion model. We show that it is possible to create a VAE-like deep latent variable model without making the Gaussian assumption on p(x | z), or even training a decoder network. A trained encoder and an unconditional diffusion model can be combined via Bayes' rule for score functions to obtain an expressive model for p(x | z). Our approach avoids strong assumptions on the parametric form of p(x | z), and thus allows us to significantly improve the performance of VAEs.
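The key idea above, Bayes' rule for score functions, says that the score of the conditional distribution decomposes as ∇_x log p(x | z) = ∇_x log p(x) + ∇_x log q(z | x), where the first term comes from the unconditional diffusion model and the second from the trained encoder. The sketch below is not the paper's code; it illustrates the decomposition with analytically tractable 1-D Gaussians standing in for the learned networks, so the combined score can be checked against the closed-form posterior.

```python
import numpy as np

def prior_score(x, mu_p=0.0, var_p=2.0):
    # Score of a Gaussian prior p(x) = N(mu_p, var_p); this term would
    # come from the unconditional diffusion model in the actual method.
    return -(x - mu_p) / var_p

def encoder_score(x, z, var_e=1.0):
    # Score w.r.t. x of a Gaussian encoder q(z | x) = N(z; x, var_e);
    # this term would come from the trained VAE encoder.
    return (z - x) / var_e

def conditional_score(x, z):
    # Bayes' rule for scores: log p(x | z) = log p(x) + log q(z | x) + const,
    # so the gradients in x simply add.
    return prior_score(x) + encoder_score(x, z)

# Sanity check: for these Gaussians the posterior p(x | z) is itself
# Gaussian with precision 1/var_p + 1/var_e, so its score is known in
# closed form and should match the summed score exactly.
var_p, var_e = 2.0, 1.0
var_post = 1.0 / (1.0 / var_p + 1.0 / var_e)
x, z = 0.4, 1.0
mu_post = var_post * (z / var_e)          # mu_p = 0
analytic = -(x - mu_post) / var_post
print(np.isclose(conditional_score(x, z), analytic))
```

In the paper's setting the two Gaussian scores are replaced by a learned diffusion score network and the gradient of the encoder's log-density, but the additive combination is the same.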
