Efficient Implementation of Second-Order Stochastic Approximation Algorithms in High-Dimensional Problems

06/23/2019
by Jingyi Zhu, et al.

Stochastic approximation (SA) algorithms have been widely applied to minimization problems in which the loss function and/or the gradient are accessible only through noisy evaluations. Among SA algorithms, the second-order simultaneous perturbation stochastic approximation (2SPSA) and the second-order stochastic gradient (2SG) methods are particularly efficient in high-dimensional problems, covering both gradient-free and gradient-based scenarios. However, due to the required matrix operations, the per-iteration floating-point-operation cost of the original 2SPSA/2SG is O(p^3), where p is the dimension of the underlying parameter. Note that this O(p^3) floating-point-operation cost is distinct from the classical SPSA-based per-iteration O(1) cost measured in the number of noisy function evaluations. In this work, we propose a technique to implement the 2SPSA/2SG algorithms efficiently via a symmetric indefinite matrix factorization, and we show that the per-iteration floating-point-operation cost is reduced from O(p^3) to O(p^2). The almost sure convergence and the rate of convergence of the newly proposed scheme are naturally inherited from the original 2SPSA/2SG. Numerical studies demonstrate the superiority of the proposed scheme in terms of both computational complexity and numerical stability.
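To make the "per-iteration O(1) cost in noisy function evaluations" concrete, here is a minimal sketch of the classical simultaneous-perturbation gradient estimate underlying SPSA: two noisy loss evaluations suffice regardless of the dimension p. This is an illustration only, not the paper's full 2SPSA/2SG implementation; the function and variable names (`loss`, `theta`, `c`) are ours.

```python
import random

def spsa_gradient(loss, theta, c):
    """One simultaneous-perturbation gradient estimate.

    Uses exactly two evaluations of `loss`, independent of the
    dimension p = len(theta); `c` is the perturbation magnitude.
    """
    p = len(theta)
    # Rademacher (+/-1) perturbation directions, one per coordinate
    delta = [random.choice((-1.0, 1.0)) for _ in range(p)]
    theta_plus = [t + c * d for t, d in zip(theta, delta)]
    theta_minus = [t - c * d for t, d in zip(theta, delta)]
    diff = loss(theta_plus) - loss(theta_minus)
    # Same two-point difference reused in every coordinate
    return [diff / (2.0 * c * d) for d in delta]
```

For example, at the minimizer of a symmetric quadratic loss the two perturbed evaluations cancel, so the estimate is exactly zero; elsewhere it is an unbiased-to-O(c^2) estimate of the true gradient. The second-order methods in the paper build on such estimates to form a Hessian approximation, and the proposed factorization keeps the resulting matrix work at O(p^2) per iteration.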
