Natural Gradient Hybrid Variational Inference with Application to Deep Mixed Models

02/27/2023
by Weiben Zhang, et al.

Stochastic models with global parameters θ and latent variables z are common, and variational inference (VI) is popular for their estimation. This paper uses a variational approximation (VA) that comprises a Gaussian with a factor covariance matrix for the marginal of θ, and the exact conditional posterior of z|θ. Stochastic optimization for learning the VA requires only generation of z from its conditional posterior, while θ is updated using the natural gradient, producing a hybrid VI method. We show that this is a well-defined natural gradient optimization algorithm for the joint posterior of (z,θ). Fast-to-compute expressions are derived for the Tikhonov-damped Fisher information matrix required for a stable natural gradient update. We use the approach to estimate probabilistic Bayesian neural networks with random output layer coefficients to allow for heterogeneity. Simulations show that using the natural gradient is more efficient than using the ordinary gradient, and that the approach is faster and more accurate than two leading benchmark natural gradient VI methods. In a financial application we show that accounting for industry-level heterogeneity using the deep model improves the accuracy of probabilistic prediction of asset pricing models.
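The hybrid scheme described in the abstract can be sketched in a few lines. The Python below is a minimal illustration, not the paper's implementation: it assumes a toy one-way random-effects model in place of the deep mixed model, a scalar θ with a scalar variance in place of the factor covariance matrix, and a plain reparameterization gradient; all names and settings (sample_z_given_theta, lam, lr, the toy model constants) are hypothetical. Each iteration draws θ from the Gaussian VA, draws z from its exact conditional posterior, and updates the variational parameters with a Tikhonov-damped natural gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random-intercept model (an assumption for illustration, not the paper's
# deep mixed model):  y_i = theta + z_i + eps_i,  z_i ~ N(0, tau2),
# eps_i ~ N(0, sig2),  prior theta ~ N(0, s0).
n, tau2, sig2, s0 = 50, 1.0, 0.5, 10.0
y = 1.5 + rng.normal(0.0, np.sqrt(tau2), n) + rng.normal(0.0, np.sqrt(sig2), n)

def sample_z_given_theta(theta):
    # Exact conditional posterior z_i | theta, y_i (conjugate Gaussian).
    post_var = 1.0 / (1.0 / tau2 + 1.0 / sig2)
    post_mean = post_var * (y - theta) / sig2
    return rng.normal(post_mean, np.sqrt(post_var))

def dlogjoint_dtheta(theta, z):
    # Gradient of log p(y, z, theta) w.r.t. theta; by Fisher's identity,
    # averaging over z ~ p(z | theta, y) gives the gradient of log p(y, theta).
    return np.sum(y - theta - z) / sig2 - theta / s0

# Variational approximation q(theta) = N(mu, exp(2 * rho)).
mu, rho = 0.0, 0.0
lam, lr = 1e-3, 0.05   # Tikhonov damping and step size

for _ in range(500):
    s = np.exp(rho)
    eps = rng.normal()
    theta = mu + s * eps                # reparameterized draw from q(theta)
    z = sample_z_given_theta(theta)     # latent variables from exact conditional

    # Reparameterization-trick ELBO gradient w.r.t. (mu, rho); the "+ 1.0" is
    # the entropy term d/d rho of log s.
    g = dlogjoint_dtheta(theta, z)
    grad = np.array([g, g * s * eps + 1.0])

    # Fisher information of N(mu, exp(2*rho)) in (mu, rho) is diag(1/s^2, 2);
    # the damped natural gradient solves (F + lam * I) u = grad.
    F = np.diag([1.0 / s**2, 2.0])
    nat_grad = np.linalg.solve(F + lam * np.eye(2), grad)

    mu += lr * nat_grad[0]
    rho += lr * nat_grad[1]

print(f"q(theta): mean {mu:.3f}, sd {np.exp(rho):.3f}")
```

When the damping constant is small, the solve amounts to rescaling the ordinary gradient by the inverse Fisher information, which is what makes the natural-gradient step approximately invariant to how q is parameterized; the damping only guards against near-singular Fisher matrices.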
