Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games

by Hassan Jaleel, et al.

Stochastic stability is a popular solution concept for stochastic learning dynamics in games. However, a critical limitation of this solution concept is its inability to distinguish between different learning rules that lead to the same steady-state behavior. We address this limitation for the first time and develop a framework for the comparative analysis of stochastic learning dynamics that have different update rules but the same steady-state behavior. We present the framework in the context of two learning dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both dynamics have the same stochastically stable states, LLL and ML correspond to different behavioral models of decision making. Moreover, we demonstrate through an example sensor coverage game that, for each of these dynamics, the paths to the stochastically stable states exhibit distinctive behaviors. Therefore, we propose multiple criteria to analyze and quantify the differences in the short- and medium-run behavior of stochastic learning dynamics. We derive and compare upper bounds on the expected hitting time to the set of Nash equilibria for both LLL and ML. For the medium- to long-run behavior, we identify a set of tools from the theory of perturbed Markov chains that yield a hierarchical decomposition of the state space into collections of states called cycles. We compare LLL and ML based on the proposed criteria and develop insights into the comparative behavior of the two dynamics.
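The contrast between the two update rules can be illustrated with a minimal sketch. The function names, the dictionary-based action/utility representation, and the inverse-temperature parameter `beta` below are illustrative assumptions, not the paper's notation: in the standard formulations, an LLL agent samples its next action from a Gibbs (soft-max) distribution over all of its actions, while an ML agent proposes a single uniformly random alternative and accepts it with a Metropolis probability depending only on the utility difference.

```python
import math
import random

def log_linear_update(utilities, beta):
    """Log-Linear Learning (LLL): sample the next action from a Gibbs
    distribution, with probability proportional to exp(beta * utility).
    `utilities` maps each action to the updating agent's utility for it."""
    weights = [math.exp(beta * u) for u in utilities.values()]
    r = random.random() * sum(weights)
    for action, w in zip(utilities, weights):
        r -= w
        if r <= 0:
            return action
    return list(utilities)[-1]  # guard against floating-point round-off

def metropolis_update(current, utilities, beta):
    """Metropolis Learning (ML): propose a uniformly random alternative
    action and accept it with probability min(1, exp(beta * delta)),
    where delta is the utility gain of the proposal over `current`."""
    proposal = random.choice([a for a in utilities if a != current])
    delta = utilities[proposal] - utilities[current]
    if delta >= 0 or random.random() < math.exp(beta * delta):
        return proposal
    return current
```

Note the behavioral difference this encodes: LLL requires the agent to evaluate every available action at each revision opportunity, whereas ML only requires comparing the current action against one random proposal, which is one reason the two rules can trace very different paths to the same stochastically stable states.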




