Non-convex weakly smooth Langevin Monte Carlo using regularization

01/16/2021
by Dao Nguyen, et al.

Discretization of continuous-time diffusion processes is a widely recognized method for sampling. However, the canonical Euler-Maruyama discretization of the Langevin diffusion process, referred to as Langevin Monte Carlo (LMC), has been studied mostly in the context of smooth (gradient-Lipschitz) and strongly log-concave densities, a restriction that considerably hinders its deployment in many sciences, including computational statistics and statistical learning. In this paper, we establish several theoretical contributions to the literature on such sampling methods for weakly smooth and non-convex densities. In particular, we use convexification of the non-convex domain <cit.> in combination with regularization to prove convergence in Kullback-Leibler (KL) divergence, with the number of iterations needed to reach an ϵ-neighborhood of the target distribution depending only polynomially on the dimension. We relax the conditions of <cit.> and prove convergence guarantees under isoperimetry, degenerate convexity, and non-strong convexity at infinity.
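For intuition, a minimal sketch of the plain LMC iteration described above (the Euler-Maruyama step x_{k+1} = x_k - η ∇U(x_k) + √(2η) ξ_k for a potential U with target density ∝ exp(-U)) is given below. This illustrates only the basic scheme, not the regularized or convexified variant analyzed in the paper; the function names and parameters are illustrative.

```python
import numpy as np

def lmc_sample(grad_U, x0, step_size=1e-2, n_iters=10_000, rng=None):
    """Plain Langevin Monte Carlo: Euler-Maruyama discretization of the
    overdamped Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dW_t,
    whose stationary density is proportional to exp(-U(x)).

    Illustrative sketch only; it does not implement the paper's
    regularization or convexification steps."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        # One LMC step: gradient drift plus scaled Gaussian noise.
        x = x - step_size * grad_U(x) + np.sqrt(2.0 * step_size) * noise
        samples[k] = x
    return samples

# Example: sample from a 2-d standard Gaussian, where U(x) = ||x||^2 / 2
# and hence grad U(x) = x.
if __name__ == "__main__":
    draws = lmc_sample(lambda x: x, x0=np.zeros(2),
                       step_size=0.05, n_iters=5_000)
    print(draws.mean(axis=0), draws.var(axis=0))
```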
