Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem

10/11/2017
by Justin Sirignano et al.

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L^p convergence rate is also proven for the algorithm in the strongly convex case.
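To make the update rule described above concrete, the sketch below applies an Euler–Maruyama discretization to one common form of the SGDCT parameter update: a noisy descent step driven by the innovation dX_t − f(X_t; θ_t) dt, with a decaying learning rate α_t, for a one-dimensional Ornstein–Uhlenbeck model with linear drift. The model, learning-rate schedule, and all constants are illustrative assumptions, not the exact setting analyzed in the paper.

```python
import numpy as np

# Illustrative sketch of SGDCT via Euler-Maruyama discretization.
# The model, learning-rate schedule, and constants are assumptions for
# demonstration, not the paper's exact setup.

rng = np.random.default_rng(0)

theta_true = -1.0       # true drift parameter of dX_t = theta* X_t dt + sigma dW_t
sigma = 0.5             # diffusion coefficient (assumed known)
dt = 1e-3               # time-step of the discretization
n_steps = 2_000_000     # length of the simulated continuous data stream

# decaying learning rate alpha_t = C / (C0 + t), a typical SGDCT choice
C_alpha, C0 = 1.0, 1.0

x = 0.0                 # state of the observed process X_t
theta = 0.0             # parameter estimate theta_t, updated along the stream

for n in range(n_steps):
    t = n * dt
    # simulate the next increment of the observed data stream
    dW = np.sqrt(dt) * rng.standard_normal()
    dX = theta_true * x * dt + sigma * dW

    # SGDCT step: noisy descent direction given by the innovation
    # dX - f(x; theta) dt, scaled by grad_theta f(x; theta) = x and
    # the inverse noise variance
    alpha = C_alpha / (C0 + t)
    theta += alpha * (x / sigma**2) * (dX - theta * x * dt)

    x += dX

print(f"estimated theta: {theta:.4f} (true value {theta_true})")
```

The central limit theorem of the paper concerns the fluctuations of estimates like θ_t around the true parameter as t grows; re-running the sketch with different random seeds gives a rough empirical sense of that limiting spread.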

