Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm

02/18/2021
by   Sajad Khodadadian, et al.

In this paper, we provide finite-sample convergence guarantees for an off-policy variant of the natural actor-critic (NAC) algorithm based on importance sampling. In particular, we show that the algorithm converges to a globally optimal policy with a sample complexity of 𝒪(ϵ^-3 log^2(1/ϵ)) under an appropriate choice of stepsizes. To overcome the large variance introduced by importance sampling, we propose the Q-trace algorithm for the critic, inspired by the V-trace algorithm (Espeholt et al., 2018). This enables us to explicitly control the bias and variance, and to characterize the trade-off between them. A major feature of our result, owing to off-policy sampling, is that we require no additional assumptions beyond ergodicity of the Markov chain induced by the behavior policy.
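To illustrate the kind of clipped importance sampling correction that V-trace-style methods use for the critic, here is a minimal sketch of an n-step off-policy Q-function backup with truncated importance ratios. This is an assumption-laden illustration, not the paper's exact Q-trace update: the function name, the truncation levels rho_bar and c_bar, and the specific form of the TD error are illustrative choices. The key idea it demonstrates is that clipping the importance ratios bounds the variance of the off-policy correction at the cost of a controllable bias.

```python
import numpy as np

def clipped_is_critic_update(Q, trajectory, target_policy, behavior_policy,
                             alpha=0.1, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """One pass of a V-trace-inspired, clipped importance sampling Q backup.

    Q:               (num_states, num_actions) array of Q-value estimates.
    trajectory:      list of (state, action, reward) tuples sampled from the
                     behavior policy.
    target_policy,
    behavior_policy: (num_states, num_actions) arrays of action probabilities.
    rho_bar, c_bar:  truncation levels for the importance ratios; smaller
                     values mean lower variance but larger bias.
    """
    T = len(trajectory) - 1
    for t in range(T):
        s_t, a_t, _ = trajectory[t]
        correction = 0.0
        c_prod = 1.0  # product of clipped trace coefficients c_t ... c_{k-1}
        for k in range(t, T):
            s_k, a_k, r_k = trajectory[k]
            s_next, _, _ = trajectory[k + 1]
            # Importance sampling ratio at step k, clipped at rho_bar
            ratio = target_policy[s_k, a_k] / behavior_policy[s_k, a_k]
            rho_k = min(rho_bar, ratio)
            # TD error, bootstrapping with the target policy's expected value
            # at the next state (one plausible choice for a Q-function critic)
            v_next = np.dot(target_policy[s_next], Q[s_next])
            delta_k = r_k + gamma * v_next - Q[s_k, a_k]
            correction += (gamma ** (k - t)) * c_prod * rho_k * delta_k
            # Extend the trace product with the clipped coefficient at step k
            c_prod *= min(c_bar, ratio)
        Q[s_t, a_t] += alpha * correction
    return Q
```

Setting rho_bar = c_bar = ∞ would recover an unbiased but potentially high-variance importance sampling correction; finite truncation levels trade some bias for bounded variance, which is the bias-variance trade-off the paper characterizes for its Q-trace critic.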
