Optimal importance sampling for overdamped Langevin dynamics
Calculating averages with respect to multimodal probability distributions is often necessary in applications. Markov chain Monte Carlo (MCMC) methods for this purpose, which are based on time averages along a realization of a Markov process ergodic with respect to the target probability distribution, are usually plagued by a large variance due to the metastability of the process. In this work, we mathematically analyze an importance sampling approach for MCMC methods that rely on the overdamped Langevin dynamics. Specifically, we study an estimator based on an ergodic average along a realization of an overdamped Langevin process for a modified potential. The estimator incorporates a reweighting term to correct the bias that would otherwise be introduced by this modification of the potential. We obtain an explicit expression in dimension 1 for the biasing potential that minimizes the asymptotic variance of the estimator for a given observable, and propose a general numerical approach for approximating the optimal potential in the multi-dimensional setting. We also investigate an alternative approach where, instead of the asymptotic variance for a given observable, a weighted average of the asymptotic variances corresponding to a class of observables is minimized. Finally, we demonstrate the capabilities of the proposed method by means of numerical experiments.
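To make the setting concrete, the sketch below illustrates (under assumptions not taken from the paper) the type of reweighted ergodic estimator described above: an Euler-Maruyama discretization of the overdamped Langevin dynamics for a modified potential V + U, combined with self-normalized importance weights proportional to exp(beta*U) so that the time average remains consistent with the original target exp(-beta*V). The double-well potential V, the biasing potential U, the observable f, and all numerical parameters are hypothetical choices for illustration only; the paper's optimal biasing potential and estimator analysis are not reproduced here.

```python
# Illustrative sketch of a reweighted ergodic average for importance sampling
# with overdamped Langevin dynamics (dimension 1). Assumed, not from the paper:
# the choices of V, U, f and all numerical parameters below.
import numpy as np

rng = np.random.default_rng(0)

V = lambda x: (x**2 - 1) ** 2          # bimodal target potential (double well)
dV = lambda x: 4 * x * (x**2 - 1)      # V'(x)
U = lambda x: -0.5 * (x**2 - 1) ** 2   # biasing potential: lowers the barrier
dU = lambda x: -2 * x * (x**2 - 1)     # U'(x)
f = lambda x: x**2                     # observable of interest

beta, dt, n_steps = 1.0, 1e-3, 500_000
x = 1.0
num = den = 0.0
for _ in range(n_steps):
    # Euler-Maruyama step of the overdamped Langevin dynamics for V + U:
    #   dX_t = -(V' + U')(X_t) dt + sqrt(2/beta) dW_t
    x += -(dV(x) + dU(x)) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal()
    # Reweighting to remove the bias of sampling exp(-beta*(V+U)) instead of
    # exp(-beta*V):  E_pi[f] = E_{pi_U}[f e^{beta U}] / E_{pi_U}[e^{beta U}]
    w = np.exp(beta * U(x))
    num += w * f(x)
    den += w

print("reweighted estimate of E_pi[f]:", num / den)
```

The self-normalization (dividing the weighted time average of f by the time average of the weights) removes the unknown normalizing constants of the two Boltzmann distributions; the choice of U controls the asymptotic variance of this estimator, which is the quantity the paper seeks to minimize.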