The Wang-Landau Algorithm as Stochastic Optimization and its Acceleration

07/27/2019
by Chenguang Dai, et al.

We show that the Wang-Landau algorithm can be formulated as a stochastic gradient descent algorithm that minimizes a smooth and convex objective function, whose gradient is estimated using Markov chain Monte Carlo iterations. This optimization formulation provides a new perspective for improving the efficiency of the Wang-Landau algorithm using optimization tools. We propose one possible improvement, based on the momentum method and an adaptive learning rate, and demonstrate it on a two-dimensional Ising model and a two-dimensional ten-state Potts model.
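As a rough illustration of the connection described in the abstract, the sketch below runs a standard Wang-Landau simulation on a small two-dimensional Ising model and writes the log-density-of-states update as a stochastic-gradient step with an optional momentum term. This is not the authors' code or their exact objective: the lattice size, learning rate, momentum coefficient, and the crude decay schedule are all illustrative assumptions.

```python
import numpy as np

# Minimal Wang-Landau sketch for a small 2D Ising model (hypothetical
# parameters, not the paper's implementation).  The update of log_g at the
# visited energy bin is read as a one-sample stochastic-gradient step, with
# the modification factor playing the role of the learning rate; an optional
# momentum buffer sketches the acceleration idea.

L = 8                                      # lattice side length (assumed)
spins = np.random.choice([-1, 1], size=(L, L))

def energy(s):
    """Nearest-neighbour Ising energy with periodic boundaries."""
    return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

n_bins = L * L + 1                         # energies -2N..2N in steps of 4
def bin_index(e):
    return int((e + 2 * L * L) // 4)

log_g = np.zeros(n_bins)                   # running estimate of log density of states
velocity = np.zeros(n_bins)                # momentum buffer (illustrative)
lr, beta_momentum = 1.0, 0.9               # learning rate (log f) and momentum (assumed values)

e_cur = energy(spins)
for step in range(200_000):
    # Single-spin-flip proposal and its energy change.
    i, j = np.random.randint(L, size=2)
    de = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                            + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    e_new = e_cur + de
    # Wang-Landau acceptance favours less-visited (lower log_g) energies.
    if np.log(np.random.rand()) < log_g[bin_index(e_cur)] - log_g[bin_index(e_new)]:
        spins[i, j] *= -1
        e_cur = e_new
    # One-sample "gradient": indicator of the visited bin minus the flat target.
    grad = -np.ones(n_bins) / n_bins
    grad[bin_index(e_cur)] += 1.0
    # SGD-with-momentum style update of log_g (beta_momentum = 0 recovers plain WL).
    velocity = beta_momentum * velocity + grad
    log_g += lr * velocity
    if step > 0 and step % 50_000 == 0:
        lr *= 0.5                          # crude decay, standing in for the flatness check
```

Setting beta_momentum to zero and updating lr via the usual flat-histogram criterion recovers the standard Wang-Landau scheme; the momentum and adaptive-learning-rate modifications proposed in the paper operate on this same update, though their precise form is given in the full text.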
