Robust Q-learning Algorithm for Markov Decision Processes under Wasserstein Uncertainty

09/30/2022
by Ariel Neufeld, et al.

We present a novel Q-learning algorithm to solve distributionally robust Markov decision problems, where the ambiguity set of transition probabilities for the underlying Markov decision process is a Wasserstein ball around a (possibly estimated) reference measure. We prove convergence of the presented algorithm and provide several examples, also using real data, to illustrate both the tractability of our algorithm and the benefits of considering distributional robustness when solving stochastic optimal control problems, in particular when the estimated distributions turn out to be misspecified in practice.
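The paper itself defines the algorithm and its convergence proof; as a rough illustration only, the sketch below shows one way a Wasserstein-robust Bellman target could be approximated for a finite-state MDP, using the standard Wasserstein-1 duality (a supremum over a dual multiplier, here handled by grid search). The function names (worst_case_value, robust_q_learning) and parameters (radius, lambdas, the ground metric dist_matrix) are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def worst_case_value(values, ref_probs, dist_matrix, radius, lambdas):
    """Approximate inf over the Wasserstein-1 ball of radius `radius`
    around the reference distribution `ref_probs` of E[values], via the
    standard duality:
        sup_{lam >= 0} ( -lam * radius
                         + E_ref[ min_y ( values[y] + lam * d(x, y) ) ] ).
    Assumption: a grid search over `lambdas` is accurate enough here."""
    best = -np.inf
    for lam in lambdas:
        # inner(x) = min over y of ( values[y] + lam * d(x, y) )
        inner = np.min(values[None, :] + lam * dist_matrix, axis=1)
        dual_val = -lam * radius + ref_probs @ inner
        best = max(best, dual_val)
    return best

def robust_q_learning(P_ref, R, dist_matrix, radius, gamma=0.95,
                      iterations=5000, alpha=0.1,
                      lambdas=np.linspace(0.0, 50.0, 51), seed=0):
    """Tabular robust Q-learning sketch for a finite MDP.
    P_ref[s, a] : (estimated) reference next-state distribution
    R[s, a]     : expected immediate reward
    dist_matrix : pairwise ground metric d(s', s'') on the state space"""
    rng = np.random.default_rng(seed)
    n_states, n_actions = R.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(iterations):
        s = rng.integers(n_states)
        a = rng.integers(n_actions)
        v_next = Q.max(axis=1)  # greedy continuation value per next state
        # worst-case expected continuation value over the Wasserstein ball
        robust_cont = worst_case_value(v_next, P_ref[s, a],
                                       dist_matrix, radius, lambdas)
        target = R[s, a] + gamma * robust_cont
        Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

Setting radius = 0 recovers an ordinary (non-robust) Q-learning target against the reference measure, which is a quick sanity check for the sketch.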
