Faithful Model Inversion Substantially Improves Auto-encoding Variational Inference

12/01/2017
by Stefan Webb, et al.

In learning deep generative models, the encoder for variational inference is typically formed in an ad hoc manner, with a structure and parametrization analogous to the forward model. Our chief insight is that this results in coarse approximations to the posterior, and that the d-separation properties of the Bayesian network (BN) structure of the forward model should instead be used, in a principled way, to produce encoders that are faithful to the posterior; to this end we introduce the novel Compact Minimal I-map (CoMI) algorithm. Applying our method to common models reveals that standard encoder design choices omit many important edges, and through experiments we demonstrate that modelling these edges is important for optimal learning. We further show that using a faithful encoder is crucial when modelling with continuous relaxations of categorical distributions.
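As a concrete illustration of the insight above, here is a minimal sketch (not the paper's CoMI algorithm; the toy model and the d-separation test are our own illustrative assumptions) using the standard ancestral-moral-graph construction. It shows that in a simple v-structure forward model, the two latents become dependent once the observation is conditioned on, so an encoder that merely mirrors the forward structure misses a required edge.

```python
# A minimal sketch, assuming a toy v-structure model z1 -> x <- z2.
# The d-separation check below uses the classical moralization test.
import networkx as nx

def d_separated(G, xs, ys, zs):
    """Test whether node sets xs and ys are d-separated given zs in DAG G."""
    # 1. Restrict to the ancestral subgraph of the query nodes.
    relevant = set(xs) | set(ys) | set(zs)
    ancestral = relevant | {a for n in relevant for a in nx.ancestors(G, n)}
    H = G.subgraph(ancestral)
    # 2. Moralize: connect co-parents of every node, then drop directions.
    M = nx.Graph(H.edges())
    M.add_nodes_from(H.nodes())
    for n in H.nodes():
        parents = list(H.predecessors(n))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                M.add_edge(parents[i], parents[j])
    # 3. Remove the conditioning set; d-separated iff xs and ys disconnect.
    M.remove_nodes_from(zs)
    return all(not nx.has_path(M, x, y) for x in xs for y in ys)

# Forward model: two independent latents jointly explaining one observation.
forward = nx.DiGraph([("z1", "x"), ("z2", "x")])

# Marginally, z1 and z2 are independent...
print(d_separated(forward, {"z1"}, {"z2"}, set()))   # True
# ...but conditioning on x couples them ("explaining away").
print(d_separated(forward, {"z1"}, {"z2"}, {"x"}))   # False
```

Running this prints True then False: z1 and z2 are marginally independent but become dependent given x, so a mean-field encoder q(z1|x) q(z2|x) that simply inverts the forward edges cannot represent the true posterior. A faithful inverse needs an extra edge such as z1 -> z2, exactly the kind of edge the abstract notes is omitted by standard encoder designs.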
