Convergence Rate of the (1+1)-Evolution Strategy with Success-Based Step-Size Adaptation on Convex Quadratic Functions

03/02/2021
by Daiki Morinaga, et al.

The (1+1)-evolution strategy (ES) with success-based step-size adaptation is analyzed on a general convex quadratic function and its monotone transformation, that is, f(x) = g((x - x^*)^T H (x - x^*)), where g:ℝ→ℝ is a strictly increasing function, H is a positive-definite symmetric matrix, and x^* ∈ ℝ^d is the optimal solution of f. The convergence rate, that is, the decrease rate of the distance from a search point m_t to the optimal solution x^*, is proven to be in O(exp(-L/Tr(H))), where L is the smallest eigenvalue of H and Tr(H) is the trace of H. This result generalizes the known rate of O(exp(-1/d)) for the case of H = I_d (I_d is the identity matrix of dimension d) and O(exp(-1/(d·ξ))) for the case of H = diag(ξ·I_{d/2}, I_{d/2}). To the best of our knowledge, this is the first study in which the convergence rate of the (1+1)-ES is derived explicitly and rigorously on a general convex quadratic function, revealing the impact of the eigenvalue distribution of the Hessian H on the optimization, rather than only the impact of its condition number.
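For readers unfamiliar with the algorithm under analysis, the following is a minimal sketch of a (1+1)-ES with success-based (1/5-success-rule style) step-size adaptation applied to the quadratic f(x) = (x - x^*)^T H (x - x^*). The step-size update factors, the initial step size, and the stopping criterion below are illustrative assumptions chosen for the sketch, not the exact constants used in the paper's analysis.

```python
# Sketch of a (1+1)-ES with success-based step-size adaptation on
# f(x) = (x - x_star)^T H (x - x_star). Constants are illustrative, not the
# ones analyzed in the paper.
import numpy as np

def one_plus_one_es(H, x_star, m0, sigma0, max_iter=10_000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(m0)
    m, sigma = np.array(m0, dtype=float), float(sigma0)
    f = lambda x: (x - x_star) @ H @ (x - x_star)
    fm = f(m)
    # Factors chosen so that log(sigma) is unchanged on average at a 1/5 success rate.
    up, down = np.exp(1.0 / d), np.exp(-0.25 / d)
    for _ in range(max_iter):
        y = m + sigma * rng.standard_normal(d)   # sample one offspring
        fy = f(y)
        if fy <= fm:                              # success: accept and enlarge sigma
            m, fm = y, fy
            sigma *= up
        else:                                     # failure: keep the parent, shrink sigma
            sigma *= down
    return m, sigma

if __name__ == "__main__":
    d = 10
    H = np.diag(np.linspace(1.0, 100.0, d))       # ill-conditioned convex quadratic
    x_star = np.zeros(d)
    m, sigma = one_plus_one_es(H, x_star, m0=np.ones(d), sigma0=1.0)
    print("final distance to optimum:", np.linalg.norm(m - x_star))
```

In this notation, the paper's result says that the distance ||m_t - x^*|| contracts per iteration by a factor of roughly exp(-L/Tr(H)); for the diagonal H in the usage example above, Tr(H) is the sum of the eigenvalues and L is the smallest of them.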
