On the Approximation Accuracy of Gaussian Variational Inference
The main quantities of interest in Bayesian inference are arguably the first two moments of the posterior distribution. In the past decades, variational inference (VI) has emerged as a tractable approach to approximating these summary statistics, and a viable alternative to the more established paradigm of Markov chain Monte Carlo. However, little is known about the approximation accuracy of VI. In this work, we bound the mean and covariance approximation error of Gaussian VI in terms of dimension and sample size. Our results indicate that Gaussian VI significantly outperforms the classical Gaussian approximation obtained from the ubiquitous Laplace method. Our error analysis relies on a Hermite series expansion of the log posterior whose first terms are precisely cancelled out by the first-order optimality conditions associated with the Gaussian VI optimization problem.
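To make the contrast concrete, the following is a minimal one-dimensional sketch (not from the paper) of the two Gaussian approximations being compared: the Laplace approximation, which centers a Gaussian at the posterior mode with variance given by the inverse negative Hessian, and Gaussian VI, which minimizes KL(q || p) over the Gaussian mean and standard deviation. The skewed target density (an unnormalized skew-normal) and the quadrature-based optimizer are illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy import optimize, stats

# Illustrative skewed target: an unnormalized skew-normal log density,
# log p(x) = -x^2/2 + log Phi(3x) + const.
log_p = lambda x: -0.5 * x**2 + stats.norm.logcdf(3.0 * x)

# Laplace approximation: Gaussian at the mode, variance from the
# inverse negative Hessian (here via a finite-difference second derivative).
mode = optimize.minimize_scalar(lambda x: -log_p(x)).x
h = 1e-4
hess = (log_p(mode + h) - 2 * log_p(mode) + log_p(mode - h)) / h**2
lap_mean, lap_var = mode, -1.0 / hess

# Gaussian VI: minimize KL(q || p) over (mu, log sigma), computing the
# expectation under q = N(mu, sigma^2) by Gauss-Hermite quadrature.
nodes, weights = np.polynomial.hermite_e.hermegauss(50)
weights = weights / np.sqrt(2.0 * np.pi)  # weights for E_{N(0,1)}[f]

def neg_elbo(params):
    mu, log_s = params
    x = mu + np.exp(log_s) * nodes
    # KL(q || p) up to constants: E_q[log q] = -log sigma + const.
    return -log_s - np.sum(weights * log_p(x))

res = optimize.minimize(neg_elbo, x0=[lap_mean, 0.5 * np.log(lap_var)])
vi_mean, vi_var = res.x[0], np.exp(2.0 * res.x[1])

print("Laplace mean:", lap_mean)
print("VI mean:     ", vi_mean)  # shifted toward the heavier right tail
```

Because the target is right-skewed, the VI mean lands closer to the true posterior mean than the mode-centered Laplace mean does, which is the qualitative gap the paper's bounds quantify.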