DF2: Distribution-Free Decision-Focused Learning

08/11/2023
by Lingkai Kong, et al.

Decision-focused learning (DFL) has recently emerged as a powerful approach for predict-then-optimize problems by customizing a predictive model to a downstream optimization task. However, existing end-to-end DFL methods are hindered by three significant bottlenecks: model mismatch error, sample average approximation error, and gradient approximation error. Model mismatch error stems from the misalignment between the model's parameterized predictive distribution and the true probability distribution. Sample average approximation error arises when finite samples are used to approximate the expected optimization objective. Gradient approximation error arises because exact gradient computation in DFL relies on the KKT conditions, so most methods resort to approximate gradients for backpropagation when the objective is non-convex. In this paper, we present DF2 – the first distribution-free decision-focused learning method explicitly designed to address these three bottlenecks. Rather than depending on a task-specific forecaster that requires precise model assumptions, our method directly learns the expected optimization function during training. To learn this function efficiently in a data-driven manner, we devise an attention-based model architecture inspired by the distribution-based parameterization of the expected objective. Our method is, to the best of our knowledge, the first to address all three bottlenecks within a single model. We evaluate DF2 on a synthetic problem, a wind power bidding problem, and a non-convex vaccine distribution problem, demonstrating its effectiveness.
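To make the distribution-free idea concrete, below is a minimal, hypothetical PyTorch sketch (not from the paper): instead of fitting a parametric predictive distribution and approximating the expected objective by sampling, a surrogate network is trained to regress the realized task objective directly from (context, decision) pairs, and decisions are then obtained by gradient ascent on the differentiable surrogate. The names (Surrogate, train_surrogate, choose_decision) and the plain MLP are illustrative assumptions; the paper's actual architecture is attention-based.

# Hypothetical sketch of learning the expected objective directly,
# rather than fitting a predictive distribution p(y|x).
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Maps (context features x, candidate decision z) to a scalar
    estimate of the expected task objective."""
    def __init__(self, x_dim: int, z_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def train_surrogate(model, data, epochs=100, lr=1e-3):
    """data: iterable of (x, z, realized_objective) mini-batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, z, obj in data:
            loss = nn.functional.mse_loss(model(x, z), obj)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

def choose_decision(model, x, z_dim, steps=200, lr=0.05):
    """Pick a decision by gradient ascent on the learned surrogate;
    the surrogate is differentiable in z, so no KKT-based gradient
    approximation is needed."""
    z = torch.zeros(x.shape[0], z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -model(x, z).mean()  # maximize the estimated expected objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

Because the surrogate in this sketch is fit to realized objectives rather than to samples from an assumed distribution, it sidesteps the model mismatch and sample average approximation steps, and because decisions are found by ascending a learned differentiable function, no KKT-based gradient is required; the attention-based parameterization described in the abstract would take the place of the plain MLP used here.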
