Rayleigh-Gauss-Newton optimization with enhanced sampling for variational Monte Carlo

by   Robert J. Webber, et al.

Variational Monte Carlo (VMC) is an approach for computing ground-state wavefunctions that has recently become more powerful due to the introduction of neural network-based wavefunction parametrizations. However, efficiently training neural wavefunctions to converge to an energy minimum remains a difficult problem. In this work, we analyze optimization and sampling methods used in VMC and introduce alterations to improve their performance. First, based on theoretical convergence analysis in a noiseless setting, we motivate a new optimizer that we call the Rayleigh-Gauss-Newton (RGN) method, which can improve upon gradient descent and natural gradient descent to achieve superlinear convergence with little added computational cost. Second, in order to realize this favorable comparison in the presence of stochastic noise, we analyze the effect of sampling error on VMC parameter updates and experimentally demonstrate that it can be reduced by the parallel tempering method. In particular, we demonstrate that RGN can be made robust to energy spikes that occur when new regions of configuration space become available to the sampler over the course of optimization. Finally, putting theory into practice, we apply our enhanced optimization and sampling methods to the transverse-field Ising and XXZ models on large lattices, yielding ground-state energy estimates with remarkably high accuracy after just 200-500 parameter updates.
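The abstract's sampling enhancement, parallel tempering, is a standard technique: Metropolis chains are run at several temperatures in parallel, and configurations are occasionally swapped between adjacent temperatures so that barrier-crossing moves made by hot chains reach the target distribution. The sketch below is a minimal, generic illustration on a toy double-well energy, not the paper's VMC implementation; the energy function, temperature ladder, and step size are illustrative choices.

```python
import numpy as np

def energy(x):
    # Toy double-well energy; stands in for a hard-to-sample landscape
    return (x**2 - 1.0)**2

def parallel_tempering(betas, n_steps, step=0.5, seed=0):
    """Metropolis chains at inverse temperatures `betas`, with replica swaps.

    Hot chains (small beta) cross barriers easily; swap moves let those
    configurations propagate down to the coldest (largest-beta) chain.
    Returns the samples recorded from the coldest chain.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=len(betas))          # one walker per temperature
    samples = []
    for _ in range(n_steps):
        # Local Metropolis update within each chain
        prop = x + step * rng.normal(size=len(betas))
        accept = rng.random(len(betas)) < np.exp(-betas * (energy(prop) - energy(x)))
        x = np.where(accept, prop, x)
        # Attempt a swap between a random adjacent pair of replicas;
        # acceptance ratio is exp((beta_i - beta_j) * (E_i - E_j))
        i = rng.integers(len(betas) - 1)
        log_r = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if np.log(rng.random()) < log_r:
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[-1])                # record the coldest chain
    return np.array(samples)

betas = np.array([0.2, 0.5, 1.0, 3.0])       # illustrative temperature ladder
chain = parallel_tempering(betas, n_steps=2000)
```

In the VMC setting of the paper, the role of the double well is played by a multimodal distribution over spin configurations, and swaps serve the same purpose: letting the sampler reach newly accessible regions of configuration space without getting trapped.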

