Single Time-scale Actor-critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees
We propose a single time-scale actor-critic algorithm to solve the linear quadratic regulator (LQR) problem. A least squares temporal difference (LSTD) method is applied to the critic and a natural policy gradient method is used for the actor. We give a proof of convergence with sample complexity $\mathcal{O}(\varepsilon^{-1}\log(\varepsilon^{-1})^{2})$. The proof technique is applicable to general single time-scale bilevel optimization problems. We also numerically validate our theoretical convergence results.
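To make the single time-scale structure concrete, here is a minimal sketch, not the paper's algorithm, step sizes, or analysis setting: at every iteration the critic computes one LSTD-Q estimate of the quadratic state-action value function under the current gain, and the actor immediately takes one natural policy gradient step using that estimate, so both are updated on the same time scale. The system matrices, cost weights, discount factor, exploration noise, batch size, step size, and the helper names (`svec_features`, `unpack_theta`, `single_timescale_ac_lqr`) are all illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def svec_features(z):
    """Quadratic features: upper-triangular entries of z z^T (off-diagonals doubled)
    plus a constant feature to absorb the process-noise offset in the value."""
    zz = np.outer(z, z)
    idx = np.triu_indices(len(z))
    feats = zz[idx].copy()
    feats[idx[0] != idx[1]] *= 2.0          # so weights map back to a symmetric matrix
    return np.concatenate([feats, [1.0]])

def unpack_theta(theta, n):
    """Rebuild the symmetric matrix Theta from the quadratic-feature weights."""
    M = np.zeros((n, n))
    M[np.triu_indices(n)] = theta[:-1]
    return M + M.T - np.diag(np.diag(M))

def single_timescale_ac_lqr(A, B, Q, R, K0, gamma=0.95, eta=0.05,
                            batch=2000, iters=50, sigma_u=0.5, seed=0):
    """Toy single time-scale actor-critic for discounted LQR (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, m = B.shape
    d = (n + m) * (n + m + 1) // 2 + 1       # quadratic features + constant
    K = K0.astype(float).copy()              # K0 assumed to stabilize A - B @ K0
    x = np.zeros(n)
    for _ in range(iters):
        # ---- critic: LSTD-Q estimate of Q_K(x, u) ~ [x; u]' Theta [x; u] + const ----
        Als, bls = np.zeros((d, d)), np.zeros(d)
        for _ in range(batch):
            u = -K @ x + sigma_u * rng.standard_normal(m)      # exploratory action
            c = x @ Q @ x + u @ R @ u                           # stage cost
            x_next = A @ x + B @ u + 0.1 * rng.standard_normal(n)
            phi = svec_features(np.concatenate([x, u]))
            phi_next = svec_features(np.concatenate([x_next, -K @ x_next]))
            Als += np.outer(phi, phi - gamma * phi_next)
            bls += phi * c
            x = x_next
        theta = np.linalg.solve(Als + 1e-6 * np.eye(d), bls)   # small ridge for stability
        Theta = unpack_theta(theta, n + m)
        Theta_uu, Theta_ux = Theta[n:, n:], Theta[n:, :n]
        # ---- actor: one natural policy gradient step from the critic's estimate ----
        K = K - eta * (Theta_uu @ K - Theta_ux)
    return K

# Example on a small, hypothetical system (not from the paper):
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[1.0, 2.0]])                  # stabilizes A - B @ K0 here
print("learned gain K:\n", single_timescale_ac_lqr(A, B, Q, R, K0))
```

For the quadratic Q-function the natural gradient direction reduces to $\Theta_{uu} K - \Theta_{ux}$, which is what the actor step above uses with the critic's estimate of $\Theta$; in a two time-scale scheme the critic would instead be run to (near) convergence before each actor update.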