Doubly Robust Policy Evaluation and Optimization

by Miroslav Dudík, et al.

We study sequential decision making in environments where rewards are only partially observed, but can be modeled as a function of observed contexts and the chosen action by the decision maker. This setting, known as contextual bandits, encompasses a wide variety of applications such as health care, content recommendation and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. In this work, we leverage the strengths and overcome the weaknesses of the two approaches by applying the doubly robust estimation technique to the problems of policy evaluation and optimization. We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust estimation uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice in policy evaluation and optimization.
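The doubly robust estimator described above combines a direct-method term (a learned reward model evaluated at the target policy's action) with an inverse-propensity correction applied only where the logged action agrees with the target policy. A minimal sketch, assuming a deterministic target policy and illustrative function names not taken from the paper:

```python
def doubly_robust_value(contexts, actions, rewards, propensities,
                        reward_model, target_policy):
    """Doubly robust off-policy value estimate.

    contexts, actions, rewards: logged data from the past policy.
    propensities: the past policy's probability of each logged action.
    reward_model(x, a): a (possibly biased) model of expected reward.
    target_policy(x): the deterministic new policy being evaluated.
    """
    n = len(rewards)
    total = 0.0
    for x, a, r, p in zip(contexts, actions, rewards, propensities):
        a_target = target_policy(x)
        # Direct-method term: model prediction for the target action.
        dm = reward_model(x, a_target)
        # Importance-weighted correction of the model's residual,
        # applied only when the logged action matches the target one.
        correction = (r - reward_model(x, a)) / p if a == a_target else 0.0
        total += dm + correction
    return total / n
```

If the reward model is exact, the residual correction vanishes and the estimate is unbiased regardless of the propensities; if instead the propensities are exact, the importance weighting removes the model's bias. This is the "doubly robust" property: either component being correct suffices.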


