Minimax Regret for Stochastic Shortest Path

03/24/2021
by Alon Cohen, et al.

We study the Stochastic Shortest Path (SSP) problem, in which an agent has to reach a goal state with minimum total expected cost. In the learning formulation of the problem, the agent has no prior knowledge of the costs and dynamics of the model. She repeatedly interacts with the model for K episodes and has to learn to approximate the optimal policy as closely as possible. In this work we show that the minimax regret for this setting is Õ(B_⋆√(|S| |A| K)), where B_⋆ is a bound on the expected cost of the optimal policy from any state, S is the state space, and A is the action space. This matches the lower bound of Rosenberg et al. (2020) up to logarithmic factors and improves their regret bound by a factor of √(|S|). Our algorithm runs in polynomial time per episode and is based on a novel reduction to reinforcement learning in finite-horizon MDPs. To that end, we provide an algorithm for the finite-horizon setting whose leading regret term depends only logarithmically on the horizon, yielding the same regret guarantees for SSP.
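To make the bound concrete, below is a minimal LaTeX sketch of the regret measure standardly used in this line of work: the cumulative cost incurred over the K episodes compared against K times the optimal expected cost from the initial state. The symbols s_init (the initial state) and C_k (the total cost incurred in episode k) are our illustrative notation and are not defined in the abstract itself.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Regret over K episodes: cumulative incurred cost minus K times the
% optimal expected cost-to-go V^*(s_init) from the initial state.
% (C_k and s_init are illustrative notation, not from the abstract.)
\[
  R_K \;=\; \sum_{k=1}^{K} C_k \;-\; K \cdot V^{\star}(s_{\mathrm{init}})
\]
% The paper's upper bound, matching the Rosenberg et al. (2020) lower
% bound up to the logarithmic factors hidden by the tilde:
\[
  R_K \;=\; \widetilde{O}\!\left(B_{\star}\sqrt{|S|\,|A|\,K}\right)
\]
\end{document}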
