Posterior Sampling for Continuing Environments

11/29/2022
by Wanqiao Xu, et al.

We develop an extension of posterior sampling for reinforcement learning (PSRL) that is suited for a continuing agent-environment interface and integrates naturally into agent designs that scale to complex environments. The approach maintains a statistically plausible model of the environment and follows a policy that maximizes expected γ-discounted return in that model. At each time, with probability 1-γ, the model is replaced by a sample from the posterior distribution over environments. For a suitable schedule of γ, we establish an Õ(τ S √(A T)) bound on the Bayesian regret, where S is the number of environment states, A is the number of actions, and τ denotes the reward averaging time, which is a bound on the duration required to accurately estimate the average reward of any policy.
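
To make the interaction pattern described above concrete, here is a minimal tabular sketch under simplifying assumptions: a Dirichlet posterior over transition probabilities, posterior-mean reward estimates, and a hypothetical environment interface with reset() and step(). The helper names (sample_model, plan_discounted) and these modeling choices are illustrative assumptions, not the authors' implementation.

import numpy as np

def psrl_continuing(env, S, A, gamma=0.99, T=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # Posterior statistics: Dirichlet counts for transitions, running sums
    # for a simple posterior-mean reward estimate (an assumed simplification).
    trans_counts = np.ones((S, A, S))        # Dirichlet(1, ..., 1) prior
    reward_sum = np.zeros((S, A))
    visit_counts = np.zeros((S, A))

    def sample_model():
        # Draw a statistically plausible environment from the posterior.
        P = np.array([[rng.dirichlet(trans_counts[s, a]) for a in range(A)]
                      for s in range(S)])
        R = reward_sum / np.maximum(visit_counts, 1)
        return P, R

    def plan_discounted(P, R):
        # Value iteration for the gamma-discounted objective in the sampled model.
        V = np.zeros(S)
        for _ in range(1000):
            Q = R + gamma * (P @ V)          # Q has shape (S, A)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < 1e-6:
                break
            V = V_new
        return Q.argmax(axis=1)              # greedy policy in the sampled model

    policy = plan_discounted(*sample_model())
    s = env.reset()                          # assumed interface: returns a state index
    for _ in range(T):
        a = policy[s]
        s_next, r = env.step(a)              # assumed interface: returns (next state, reward)
        # Update posterior statistics with the observed transition and reward.
        trans_counts[s, a, s_next] += 1
        reward_sum[s, a] += r
        visit_counts[s, a] += 1
        # With probability 1 - gamma, replace the model by a fresh posterior
        # sample and replan, as in the resampling rule stated in the abstract.
        if rng.random() < 1 - gamma:
            policy = plan_discounted(*sample_model())
        s = s_next

Note that resampling with probability 1 - gamma each step means the sampled model is held for roughly 1/(1 - gamma) steps on average, which is what ties the discount factor to the effective planning horizon in the continuing setting.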
