Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopic Followers?

by Han Zhong et al.

We study multi-player general-sum Markov games in which one player is designated as the leader and the remaining players are followers. In particular, we focus on the class of games where the followers are myopic, i.e., they aim to maximize their instantaneous rewards. For such a game, our goal is to find a Stackelberg-Nash equilibrium (SNE): a policy pair (π^*, ν^*) such that (i) π^* is the optimal policy for the leader when the followers always play their best response, and (ii) ν^* is the followers' best-response policy, i.e., a Nash equilibrium of the followers' game induced by π^*. We develop sample-efficient reinforcement learning (RL) algorithms for finding an SNE in both the online and offline settings. Our algorithms are optimistic and pessimistic variants of least-squares value iteration, and they readily incorporate function approximation to handle large state spaces. Furthermore, in the case of linear function approximation, we prove that our algorithms achieve sublinear regret and sublinear suboptimality in the online and offline settings, respectively. To the best of our knowledge, these are the first provably efficient RL algorithms for finding SNEs in general-sum Markov games with myopic followers.
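The SNE definition above simplifies sharply in the degenerate case of a single state and a single myopic follower: the followers' Nash equilibrium collapses to a best response, and the leader's problem becomes a one-shot Stackelberg game solvable by enumeration. A minimal sketch of that special case (not the paper's LSVI-based algorithms), with hypothetical payoff matrices:

```python
# Illustrative sketch only: one-shot Stackelberg game with one myopic
# follower. The payoff matrices below are hypothetical.
R_leader = [[2.0, 0.0],
            [3.0, 1.0]]    # R_leader[a][b]: leader's reward
R_follower = [[1.0, 0.0],
              [0.0, 2.0]]  # R_follower[a][b]: follower's reward

def best_response(a):
    """Myopic follower maximizes its instantaneous reward given leader action a."""
    row = R_follower[a]
    return max(range(len(row)), key=row.__getitem__)

def stackelberg_equilibrium():
    """Leader commits to the action whose induced best response pays the most."""
    values = [R_leader[a][best_response(a)] for a in range(len(R_leader))]
    a_star = max(range(len(values)), key=values.__getitem__)
    return a_star, best_response(a_star), values[a_star]

a_star, b_star, value = stackelberg_equilibrium()
```

With these payoffs the leader commits to action 0 and earns 2.0: committing to action 1 looks attractive against follower action 0 (payoff 3.0), but the myopic follower would best-respond with action 1, leaving the leader only 1.0. The full Markov-game setting replaces this enumeration with optimistic (online) or pessimistic (offline) value iteration over states.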



Related papers:
Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium

We develop provably efficient reinforcement learning algorithms for two-...

Actions Speak What You Want: Provably Sample-Efficient Reinforcement Learning of the Quantal Stackelberg Equilibrium from Strategic Feedbacks

We study reinforcement learning (RL) for learning a Quantal Stackelberg ...

Model-free Reinforcement Learning for Stochastic Stackelberg Security Games

In this paper, we consider a sequential stochastic Stackelberg game with...

Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games

Real world applications such as economics and policy making often involv...

Offline Learning in Markov Games with General Function Approximation

We study offline multi-agent reinforcement learning (RL) in Markov games...

The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces

Modern reinforcement learning (RL) commonly engages practical problems w...

Performance Analysis of Trial and Error Algorithms

Model-free decentralized optimizations and learning are receiving increa...
