Direct Uncertainty Estimation in Reinforcement Learning

06/06/2013
by Sergey Rodionov, et al.

The optimal probabilistic approach to reinforcement learning is computationally infeasible. Its common simplification, which neglects the difference between the true environment and a model of it estimated from a limited number of observations, gives rise to the exploration vs. exploitation problem. Uncertainty can be expressed as a probability distribution over the space of environment models, and this uncertainty can be propagated to the action-value function via Bellman iterations; this propagation, however, is computationally inefficient. We consider the possibility of measuring the uncertainty of the action-value function directly, and analyze whether this simplified approach is sufficient.
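To make the propagation step concrete, here is a minimal sketch of the baseline approach the abstract calls inefficient: maintaining a posterior over environment models and pushing that uncertainty through Bellman iterations by sampling. The tabular MDP, the Dirichlet posterior over transition rows, and all sizes and names below are illustrative assumptions, not taken from the paper; the paper's own proposal is to estimate the action-value uncertainty directly instead of via this sampling loop.

```python
import numpy as np

# Hypothetical tabular MDP with observed transition counts (all sizes,
# counts, and rewards here are assumptions for illustration only).
n_states, n_actions, gamma = 5, 2, 0.95
rng = np.random.default_rng(0)
counts = rng.integers(1, 10, size=(n_states, n_actions, n_states))
rewards = rng.normal(size=(n_states, n_actions))

def q_iteration(P, R, n_iter=200):
    """Standard Bellman iteration for the action-value function Q(s, a)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):
        # Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) * max_a' Q(s',a')
        Q = R + gamma * P @ Q.max(axis=1)
    return Q

# Propagate model uncertainty: sample transition models from a Dirichlet
# posterior over each (s, a) row, solve each sampled MDP in full, and
# read the spread of the resulting Q-values as action-value uncertainty.
samples = []
for _ in range(50):
    P = np.stack([[rng.dirichlet(counts[s, a]) for a in range(n_actions)]
                  for s in range(n_states)])
    samples.append(q_iteration(P, rewards))

Q_mean = np.mean(samples, axis=0)  # point estimate of Q(s, a)
Q_std = np.std(samples, axis=0)    # propagated uncertainty of Q(s, a)
```

The cost of this scheme is one full Bellman solve per posterior sample, which is exactly the inefficiency that motivates measuring the uncertainty of Q(s, a) directly.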
