Non-asymptotic convergence bounds for modified tamed unadjusted Langevin algorithm in non-convex setting

07/06/2022
by Ariel Neufeld, et al.

We consider the problem of sampling from a high-dimensional target distribution π_β on ℝ^d with density proportional to θ ↦ e^{-βU(θ)} using explicit numerical schemes based on discretising the Langevin stochastic differential equation (SDE). In recent literature, taming has been proposed and studied as a method for ensuring stability of Langevin-based numerical schemes in the case of super-linearly growing drift coefficients for the Langevin SDE. In particular, the Tamed Unadjusted Langevin Algorithm (TULA) was proposed in [Bro+19] to sample from such target distributions with the gradient of the potential U being super-linearly growing. However, theoretical guarantees in Wasserstein distances for Langevin-based algorithms have traditionally been derived assuming strong convexity of the potential U. In this paper, we propose a novel taming factor and derive, under a setting with possibly non-convex potential U and super-linearly growing gradient of U, non-asymptotic theoretical bounds in Wasserstein-1 and Wasserstein-2 distances between the law of our algorithm, which we name the modified Tamed Unadjusted Langevin Algorithm (mTULA), and the target distribution π_β. We obtain respective rates of convergence 𝒪(λ) and 𝒪(λ^{1/2}) in Wasserstein-1 and Wasserstein-2 distances for the discretisation error of mTULA in step size λ. High-dimensional numerical simulations which support our theoretical findings are presented to showcase the applicability of our algorithm.
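To illustrate the general idea of a tamed unadjusted Langevin scheme, here is a minimal NumPy sketch. It uses the classical taming factor of [Bro+19] (dividing the gradient by 1 + λ‖∇U(θ)‖) rather than the modified factor introduced in this paper, which the abstract does not specify; the double-well-style potential U(θ) = ‖θ‖⁴/4 is an illustrative choice with a super-linearly (cubically) growing gradient.

```python
import numpy as np

def grad_U(theta):
    # Gradient of the illustrative potential U(theta) = ||theta||^4 / 4.
    # Its cubic growth is the regime where the plain unadjusted Langevin
    # scheme can diverge and taming is needed.
    return np.dot(theta, theta) * theta

def tamed_ula_step(theta, lam, beta, rng):
    g = grad_U(theta)
    # Classical TULA taming from [Bro+19]; mTULA replaces this with a
    # modified taming factor not given in the abstract.
    tamed_drift = g / (1.0 + lam * np.linalg.norm(g))
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal(theta.shape)
    return theta - lam * tamed_drift + noise

rng = np.random.default_rng(0)
theta = np.full(10, 5.0)  # start far from the mode in dimension d = 10
for _ in range(5000):
    theta = tamed_ula_step(theta, lam=0.01, beta=1.0, rng=rng)
# The tamed drift is bounded by 1/lam per step, so the iterates stay
# finite even though grad_U grows cubically.
```

Because the taming factor caps the drift contribution of each step, the scheme remains stable for step sizes where the untamed Euler–Maruyama discretisation of this SDE would blow up.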
