Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP

12/01/2022
by Jinghan Wang, et al.

This work studies the sample complexity of learning an ε-optimal policy in an average-reward Markov decision process (AMDP), given access to a generative model (simulator). When the ground-truth MDP is weakly communicating, we prove an upper bound of O(H ε^{-3} ln(1/δ)) samples per state-action pair, where H := sp(h^*) is the span of the bias of any optimal policy, ε is the target accuracy, and δ is the failure probability. This bound improves upon the best-known mixing-time-based approaches of [Jin & Sidford 2021], which assume that the mixing time of every deterministic policy is bounded. The core of our analysis is a proper reduction from the AMDP problem to the discounted MDP (DMDP) problem, which may be of independent interest since it allows the application of DMDP algorithms to AMDPs in other settings. We complement our upper bound with a minimax lower bound of Ω(|𝒮| |𝒜| H ε^{-2} ln(1/δ)) total samples, showing that a linear dependence on H is necessary and that our upper bound matches the lower bound in all of the parameters (|𝒮|, |𝒜|, H, ln(1/δ)) up to logarithmic factors.
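To make the reduction idea concrete, here is a minimal sketch (not the paper's algorithm) of how a DMDP solver can be reused for an AMDP: the discount factor is tied to the bias span H and the target accuracy ε, and the greedy policy of the induced discounted problem is returned as the policy for the average-reward objective. The constant c, the discount choice γ = 1 - ε/(cH), the plain value-iteration solver, and all function names are illustrative assumptions; the paper's analysis determines the exact discount and the resulting sample complexity.

```python
import numpy as np

def reduce_amdp_to_dmdp(P, r, H, eps, c=1.0, tol=1e-8, max_iter=100_000):
    """Illustrative reduction from an average-reward MDP to a discounted MDP.

    P: transition tensor of shape (S, A, S); r: rewards of shape (S, A).
    H: (an upper bound on) the span of the optimal bias, sp(h*).
    eps: target accuracy for the average-reward criterion.

    The discount gamma = 1 - eps/(c*H) is an illustrative choice, not the
    paper's exact constant; any DMDP solver could replace the value
    iteration below.
    """
    S, A, _ = P.shape
    gamma = 1.0 - eps / (c * H)          # effective horizon ~ c*H/eps

    # Standard value iteration for the induced discounted MDP.
    V = np.zeros(S)
    for _ in range(max_iter):
        Q = r + gamma * (P @ V)          # shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new

    # The greedy policy of the DMDP is returned as the AMDP policy.
    return Q.argmax(axis=1)
```

The design point the sketch illustrates is that the effective horizon 1/(1-γ) scales with H/ε, so a sample-efficient DMDP method run at this discount produces a policy for the original average-reward objective; the paper's contribution is the analysis showing how tight such a reduction can be made.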
