Near-Optimal Reinforcement Learning with Self-Play

06/22/2020
by Yu Bai, et al.

This paper considers the problem of designing optimal algorithms for reinforcement learning in two-player zero-sum games. We focus on self-play algorithms, which learn the optimal policy by playing against themselves without any direct supervision. In a tabular episodic Markov game with S states, A max-player actions, and B min-player actions, the best existing algorithm for finding an approximate Nash equilibrium requires 𝒪̃(S^2AB) steps of game playing, highlighting only the dependency on (S, A, B). In contrast, the best existing lower bound scales as Ω(S(A+B)), leaving a significant gap from the upper bound. This paper closes this gap for the first time: we propose an optimistic variant of the Nash Q-learning algorithm with sample complexity 𝒪̃(SAB), and a new Nash V-learning algorithm with sample complexity 𝒪̃(S(A+B)). The latter result matches the information-theoretic lower bound in all problem-dependent parameters except for a polynomial factor of the episode length. Toward understanding learning objectives in Markov games beyond finding the Nash equilibrium, we also present a computational hardness result for learning best responses against a fixed opponent. This in turn implies the computational hardness of achieving sublinear regret when playing against adversarial opponents.
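To make the optimistic Nash Q-learning idea concrete, here is a minimal sketch (not the authors' implementation) of the two ingredients the abstract alludes to: an incrementally updated Q-table over joint actions with an optimism bonus, and a zero-sum matrix-game solver used to extract the stage policy. The learning rate (H+1)/(H+t), the sqrt(H^3/t)-style bonus, the constant c, and helper names such as nash_value and optimistic_update are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.optimize import linprog

    def nash_value(Q_s):
        # Value and max-player mixed strategy of the zero-sum matrix game
        # Q_s (A x B), via the standard LP: maximize v subject to
        # p^T Q_s[:, b] >= v for every min-player action b, p a distribution.
        A, B = Q_s.shape
        c_obj = np.zeros(A + 1)
        c_obj[0] = -1.0                               # minimize -v
        A_ub = np.hstack([np.ones((B, 1)), -Q_s.T])   # v - p^T Q_s[:, b] <= 0
        b_ub = np.zeros(B)
        A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, A))])  # p sums to 1
        b_eq = np.array([1.0])
        bounds = [(None, None)] + [(0.0, 1.0)] * A
        res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                      b_eq=b_eq, bounds=bounds, method="highs")
        return res.x[0], res.x[1:]                    # game value, strategy

    def optimistic_update(Q_h, V_next, s, a, b, r, s_next, t, H, c=1.0):
        # One optimistic Q update at step h after the t-th visit to (s, a, b):
        # blend the old estimate with the bonused one-step target.
        alpha = (H + 1) / (H + t)       # stage-dependent learning rate
        beta = c * np.sqrt(H**3 / t)    # Hoeffding-style bonus (log factors omitted)
        Q_h[s, a, b] = (1 - alpha) * Q_h[s, a, b] + alpha * (r + V_next[s_next] + beta)

In an algorithm of this shape, the two pieces alternate: after updating Q_h at the visited state, the stage value and policy at that state are recomputed from nash_value(Q_h[s]), and that value feeds the update one step earlier. The paper may use a related (e.g., correlated) equilibrium subroutine rather than an exact Nash solver; the LP above stands in only as a standard computational route for the zero-sum case.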
