Sampling from Log-Concave Distributions with Infinity-Distance Guarantees and Applications to Differentially Private Optimization

11/07/2021
by Oren Mangoubi, et al.

For a d-dimensional log-concave distribution π(θ) ∝ e^{-f(θ)} on a polytope K, we consider the problem of outputting samples from a distribution ν which is O(ε)-close to π in infinity-distance sup_{θ ∈ K} |log(ν(θ)/π(θ))|. Samplers with infinity-distance guarantees are specifically desired for differentially private optimization, as traditional sampling algorithms that come with total-variation distance or KL divergence bounds are insufficient to guarantee differential privacy. Our main result is an algorithm that outputs a point from a distribution O(ε)-close to π in infinity-distance and requires O((md + dL²R²) × (LR + d log(Rd + LRd/(εr))) × md^{ω−1}) arithmetic operations, where f is L-Lipschitz, K is defined by m inequalities, is contained in a ball of radius R, and contains a ball of smaller radius r, and ω is the matrix-multiplication constant. In particular, this runtime is logarithmic in 1/ε and significantly improves on prior works. Technically, we depart from prior works that construct Markov chains on a 1/ε²-discretization of K to achieve a sample with O(ε) infinity-distance error, and instead present a method to convert continuous samples from K with total-variation distance bounds into samples with infinity-distance bounds. To achieve an improved dependence on d, we present a "soft-threshold" version of the Dikin walk, which may be of independent interest. Plugging our algorithm into the framework of the exponential mechanism yields similar improvements in the running time of ε-pure differentially private algorithms for optimization problems such as empirical risk minimization of Lipschitz convex functions and low-rank approximation, while still achieving the tightest known utility bounds.
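
The abstract builds on the Dikin walk, a Markov chain on the polytope K = {θ : Aθ ≤ b} whose proposals are Gaussian steps shaped by the Hessian of the log-barrier at the current point, filtered by a Metropolis acceptance step. As a rough illustration only, the following minimal Python sketch implements one step of a standard Dikin walk targeting π(θ) ∝ e^{-f(θ)}; it is NOT the paper's "soft-threshold" variant, and all names (A, b, f, step_size) are illustrative assumptions rather than the authors' interface.

import numpy as np

def dikin_hessian(A, b, theta):
    """Hessian of the log-barrier sum_i -log(b_i - a_i^T theta) at theta."""
    s = b - A @ theta                   # slacks; positive iff theta is interior
    return A.T @ (A / s[:, None] ** 2)  # sum_i a_i a_i^T / s_i^2

def dikin_walk_step(A, b, f, theta, step_size, rng):
    """Propose from the Dikin ellipsoid at theta; accept or reject via Metropolis."""
    H = dikin_hessian(A, b, theta)
    # Sample z ~ N(theta, step_size^2 * H^{-1}) using a Cholesky solve:
    # if H = L L^T and xi ~ N(0, I), then L^{-T} xi has covariance H^{-1}.
    L = np.linalg.cholesky(H)
    z = theta + step_size * np.linalg.solve(L.T, rng.standard_normal(theta.size))
    if np.any(b - A @ z <= 0):          # proposal left the polytope: reject
        return theta
    # Metropolis ratio for the asymmetric Gaussian proposals, whose densities
    # depend on the local Dikin ellipsoids at theta and z.
    Hz = dikin_hessian(A, b, z)
    log_q_ratio = 0.5 * (np.linalg.slogdet(Hz)[1] - np.linalg.slogdet(H)[1]) \
        + ((z - theta) @ H @ (z - theta) - (theta - z) @ Hz @ (theta - z)) \
        / (2 * step_size ** 2)
    log_accept = -f(z) + f(theta) + log_q_ratio
    return z if np.log(rng.uniform()) < log_accept else theta

Per the abstract, the paper's soft-threshold variant modifies this walk's barrier to obtain the improved dependence on d quoted in the runtime above.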
