AdaGeo: Adaptive Geometric Learning for Optimization and Sampling

02/05/2020
by   Gabriele Abbati, et al.

Gradient-based optimization and Markov chain Monte Carlo (MCMC) sampling lie at the heart of many machine learning methods. In high-dimensional settings, well-known issues such as slow mixing, non-convexity and strong correlations between parameters can hinder these algorithms' efficiency. To overcome these difficulties, we propose AdaGeo, a preconditioning framework that adaptively learns the geometry of the parameter space during optimization or sampling. In particular, we use the Gaussian process latent variable model (GP-LVM) to represent a lower-dimensional embedding of the parameters, identifying the underlying Riemannian manifold on which the optimization or sampling takes place. Samples or optimization steps are then proposed according to the geometry of this manifold. We apply our framework to stochastic gradient descent, stochastic gradient Langevin dynamics and stochastic gradient Riemannian Langevin dynamics, and show performance improvements for both optimization and sampling.
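The two-phase idea described in the abstract (first collect iterates in the full parameter space, then learn a lower-dimensional embedding and continue in the latent space) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it substitutes PCA for the GP-LVM as the embedding, uses a deterministic gradient, and all function and parameter names (`adageo_sgd_sketch`, `n_warmup`, `d_latent`, etc.) are hypothetical.

```python
import numpy as np

def adageo_sgd_sketch(grad, theta0, n_warmup=200, n_latent_steps=200,
                      d_latent=2, lr=0.05):
    """Hypothetical two-phase sketch of AdaGeo-style optimization.

    Phase 1: plain gradient descent in the observed parameter space,
    storing the iterates. Phase 2: learn a low-dimensional embedding of
    the stored iterates (PCA here, as a linear stand-in for the GP-LVM
    used in the paper) and continue the descent in the latent space,
    mapping each latent point back through the (linear) decoder.
    """
    theta = theta0.astype(float).copy()
    history = []
    # Phase 1: warm-up descent, collecting iterates for the embedding.
    for _ in range(n_warmup):
        theta = theta - lr * grad(theta)
        history.append(theta.copy())
    X = np.asarray(history)
    mu = X.mean(axis=0)
    # PCA via SVD: rows of Vt are principal directions of the iterates.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:d_latent].T                # linear decoder: theta ~ mu + W @ z
    z = W.T @ (theta - mu)             # encode the current iterate
    # Phase 2: descend in the latent space; by the chain rule the
    # latent gradient is W^T grad(theta).
    for _ in range(n_latent_steps):
        theta = mu + W @ z
        z = z - lr * (W.T @ grad(theta))
    return mu + W @ z
```

For example, on an anisotropic quadratic objective with gradient `grad(theta) = A @ theta`, the latent-space phase continues the descent within the subspace spanned by the warm-up iterates. The paper's actual method additionally handles sampling (Langevin dynamics) and uses the nonlinear GP-LVM geometry, neither of which this linear sketch captures.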


Related research

09/20/2020 · Stochastic Gradient Langevin Dynamics Algorithms with Adaptive Drifts
Bayesian deep learning offers a principled way to address many issues co...

07/27/2019 · The Wang-Landau Algorithm as Stochastic Optimization and its Acceleration
We show that the Wang-Landau algorithm can be formulated as a stochastic...

11/06/2017 · Adaptive Bayesian Sampling with Monte Carlo EM
We present a novel technique for learning the mass matrices in samplers ...

11/02/2019 · Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo
As an important Markov Chain Monte Carlo (MCMC) method, stochastic gradi...

04/09/2019 · On the Adaptivity of Stochastic Gradient-Based Optimization
Stochastic-gradient-based optimization has been a core enabling methodol...

06/06/2017 · Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC
It is challenging to develop stochastic gradient based scalable inferenc...

02/01/2023 · Riemannian Stochastic Approximation for Minimizing Tame Nonsmooth Objective Functions
In many learning applications, the parameters in a model are structurall...
