Achieving Better Regret against Strategic Adversaries

02/13/2023
by Le Cong Dinh, et al.

We study online learning problems in which the learner has extra knowledge about the adversary's behaviour, i.e., game-theoretic settings where opponents typically follow no-external-regret learning algorithms. Under this assumption, we propose two new online learning algorithms, Accurate Follow the Regularized Leader (AFTRL) and Prod-Best Response (Prod-BR), that aggressively exploit this extra knowledge while retaining the no-regret property in the worst case, where the extra information is inaccurate. Specifically, AFTRL achieves O(1) external regret or O(1) forward regret against a no-external-regret adversary, compared with the O(√T) dynamic regret of Prod-BR. To the best of our knowledge, AFTRL is the first algorithm to achieve O(1) forward regret against strategic adversaries. When playing zero-sum games with Accurate Multiplicative Weights Update (AMWU), a special case of AFTRL, we achieve last-round convergence to the Nash equilibrium. We also provide numerical experiments supporting our theoretical results; in particular, our methods achieve significantly better regret bounds and faster last-round convergence than the state of the art (e.g., Multiplicative Weights Update (MWU) and its optimistic counterpart, OMWU).
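The abstract does not spell out the update rules, but the family of algorithms it builds on is standard. As a point of reference, below is a minimal sketch of plain MWU alongside a prediction-augmented variant in the spirit of AMWU (and of optimistic MWU), where the learner folds a guess of the adversary's next loss vector into the exponential update. The function names and the exact way the prediction enters the update are our assumptions for illustration, not the paper's specification.

```python
import numpy as np

def mwu_step(cum_loss, eta):
    """Standard MWU: play weights proportional to exp(-eta * cumulative loss)."""
    w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for numerical stability
    return w / w.sum()

def predictive_mwu_step(cum_loss, predicted_next_loss, eta):
    """Hypothetical sketch in the spirit of AMWU/OMWU: add a prediction of the
    adversary's next loss vector to the cumulative loss before normalizing.
    With an exact prediction (e.g., an opponent known to run a specific
    no-regret algorithm), the learner effectively responds one step ahead."""
    shifted = cum_loss + predicted_next_loss
    w = np.exp(-eta * (shifted - shifted.min()))
    return w / w.sum()

# Toy usage: two-action online learning against a random loss sequence,
# with the "prediction" taken to be exact for demonstration.
rng = np.random.default_rng(0)
T, n, eta = 100, 2, 0.1
cum_loss = np.zeros(n)
for t in range(T):
    loss = rng.uniform(0.0, 1.0, size=n)          # adversary's loss vector for round t
    x = predictive_mwu_step(cum_loss, loss, eta)  # learner's mixed strategy
    cum_loss += loss
```

Intuitively, the more accurate the prediction, the smaller the per-round regret term, which loosely matches the O(1) bounds claimed for AFTRL; with a useless prediction the update degrades toward plain MWU behaviour, consistent with the worst-case no-regret guarantee the abstract describes.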
