Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions

by Yun Kuen Cheung, et al.

Machine learning processes, e.g. "learning in games", can be viewed as non-linear dynamical systems. In general, such systems exhibit a wide spectrum of behaviors, ranging from stability/recurrence to the undesirable phenomenon of chaos (the "butterfly effect"). Chaos captures sensitivity to round-off errors and can severely affect the predictability and reproducibility of ML systems, yet the AI/ML community's understanding of it remains rudimentary, and much awaits exploration. Recently, Cheung and Piliouras employed a volume-expansion argument to show that Lyapunov chaos occurs in the cumulative payoff space when some popular learning algorithms, including Multiplicative Weights Update (MWU), Follow-the-Regularized-Leader (FTRL) and Optimistic MWU (OMWU), are used in several subspaces of games, e.g. zero-sum, coordination or graphical constant-sum games. It is natural to ask: do these results generalize to much broader families of games? We take a game-decomposition approach and answer the question affirmatively. Among other results, we propose a notion of "matrix domination" and design a linear program, and use them to characterize bimatrix games in which MWU is Lyapunov chaotic almost everywhere. This family of games has positive Lebesgue measure in the bimatrix game space, indicating that chaos is a substantial issue for learning in games. For multi-player games, we present a local equivalence of volume change between general games and graphical games, which we use to perform volume and chaos analyses of MWU and OMWU in potential games.
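The sensitivity discussed above can be illustrated with a minimal sketch (not the paper's code): running MWU for both players of a 2x2 zero-sum game (Matching Pennies here, an illustrative choice) from two nearby initial conditions and comparing the resulting trajectories. The step size `eta` and the starting mixed strategies are assumptions for the demonstration.

```python
import numpy as np

# Row player's payoff matrix for Matching Pennies; the column player receives -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def run_mwu(x0, y0, eta=0.1, steps=1000):
    """Run Multiplicative Weights Update for both players.

    x0, y0: initial mixed strategies (probability vectors).
    Returns the final mixed strategies (x, y).
    """
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(steps):
        # Expected payoff of each pure strategy against the opponent's mix.
        px = A @ y          # row player's payoff vector
        py = -A.T @ x       # column player's payoff vector (zero-sum)
        # Multiplicative update followed by renormalization to the simplex.
        x = x * np.exp(eta * px)
        x /= x.sum()
        y = y * np.exp(eta * py)
        y /= y.sum()
    return x, y

# Two runs whose initial conditions differ by only 1e-6.
x1, y1 = run_mwu([0.6, 0.4], [0.5, 0.5])
x2, y2 = run_mwu([0.6 + 1e-6, 0.4 - 1e-6], [0.5, 0.5])
print("max strategy gap between the two runs:", np.abs(x1 - x2).max())
```

In zero-sum games MWU trajectories spiral away from the interior equilibrium, so the gap between the two runs can grow well beyond the initial perturbation; this is the behavior the volume-expansion argument quantifies in the cumulative payoff space.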

