Efficient displacement convex optimization with particle gradient descent

02/09/2023
by Hadi Daneshmand, et al.

Particle gradient descent, which uses particles to represent a probability measure and performs gradient descent on all particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes theoretical guarantees for optimizing functions that are displacement convex in the measure. Concretely, for Lipschitz displacement convex functions defined on probability measures over ℝ^d, we prove that O(1/ϵ^2) particles and O(d/ϵ^4) computations suffice to find an ϵ-optimal solution. We further provide improved complexity bounds for optimizing smooth displacement convex functions. We demonstrate the application of our results to function approximation with specific neural architectures with two-dimensional inputs.
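As an illustration of the scheme the abstract describes, below is a minimal sketch of particle gradient descent in Python. The objective, step size, and particle count are illustrative assumptions, not values from the paper: it minimizes the potential energy F(μ) = E_μ[‖x‖²/2], a simple displacement convex functional whose particle gradient decouples across particles, so the parallel update reduces to ordinary gradient descent on each particle.

```python
import numpy as np

# Minimal sketch of particle gradient descent (illustrative assumptions:
# the objective F(mu) = E_mu[V(x)] with V(x) = ||x||^2 / 2, the step size,
# and the particle count are not taken from the paper).

def potential_grad(X):
    # Gradient of V(x) = ||x||^2 / 2 evaluated at each particle (row of X).
    return X

def particle_gradient_descent(n_particles=100, d=2, step=0.1, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    # The rows of X are the particles; together they represent the
    # empirical measure mu = (1/n) * sum_i delta_{x_i}.
    X = rng.normal(size=(n_particles, d))
    for _ in range(n_steps):
        # All particles are updated in parallel with the same gradient rule.
        X -= step * potential_grad(X)
    return X

particles = particle_gradient_descent()
print("mean squared norm after descent:", np.mean(np.sum(particles**2, axis=1)))
```

For more general objectives (e.g., with interaction terms between particles), the gradient of each particle would depend on the whole particle set, but the update rule keeps the same parallel form.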
