Feature-Based Q-Learning for Two-Player Stochastic Games
Consider a two-player zero-sum stochastic game whose transition function can be embedded in a given feature space. We propose a two-player Q-learning algorithm that approximates the Nash equilibrium strategy via sampling. The algorithm is shown to find an ϵ-optimal strategy using a sample size linear in the number of features. To further improve sample efficiency, we develop an accelerated algorithm that adopts techniques such as variance reduction, monotonicity preservation, and two-sided strategy approximation. We prove that the accelerated algorithm finds an ϵ-optimal strategy with high probability using no more than Õ(K/(ϵ^2(1-γ)^4)) samples, where K is the number of features and γ is the discount factor. The sample, time, and space complexities of the algorithm are independent of the original dimensions of the game, i.e., the numbers of states and actions.
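To make the setup concrete, the sketch below illustrates one possible stochastic-approximation step of minimax Q-learning with a linear feature parametrization Q(s,a,b) = θᵀφ(s,a,b). It is a minimal illustration under assumed notation, not the authors' algorithm: it omits the paper's variance-reduction, monotonicity-preservation, and two-sided approximation machinery, and all names (phi, eta, actions_a, etc.) are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's exact method) of one feature-based
# minimax Q-learning update for a two-player zero-sum stochastic game.
# Assumptions: phi(s, a, b) returns a length-K feature vector; theta is the
# length-K parameter so that Q(s, a, b) = theta @ phi(s, a, b).
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game max_x min_y x^T M y, solved as an LP."""
    n, m = M.shape
    # Variables: x (row player's mixed strategy, n entries) followed by v (value).
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # maximize v  ==  minimize -v
    A_ub = np.hstack([-M.T, np.ones((m, 1))])     # v - (M^T x)_j <= 0 for each column j
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), [[0.0]]])  # x must sum to one
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]     # x >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def minimax_q_update(theta, phi, s, a, b, r, s_next,
                     actions_a, actions_b, gamma, eta):
    """One TD step on theta from a sampled transition (s, a, b, r, s_next)."""
    # Stage game at the next state: payoff matrix of current Q-estimates.
    Q_next = np.array([[theta @ phi(s_next, a2, b2) for b2 in actions_b]
                       for a2 in actions_a])
    v_next = matrix_game_value(Q_next)            # minimax (Nash) value of the stage game
    # Temporal-difference error toward the minimax Bellman target.
    td = r + gamma * v_next - theta @ phi(s, a, b)
    return theta + eta * td * phi(s, a, b)
```

The inner linear program is what distinguishes the two-player case from single-player Q-learning: where the standard Bellman target takes a max over actions, the minimax Bellman operator takes the value of a zero-sum stage game over both players' mixed strategies. Since θ has only K entries and each update touches only feature vectors, the per-step cost depends on K and the action-set sizes, not on the state space, consistent with the dimension-free complexities claimed above.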