Preference Exploration for Efficient Bayesian Optimization with Multiple Outcomes

03/21/2022
by Zhiyuan Jerry Lin, et al.

We consider Bayesian optimization of expensive-to-evaluate experiments that generate vector-valued outcomes over which a decision-maker (DM) has preferences. These preferences are encoded by a utility function that is not known in closed form but can be estimated by asking the DM to express preferences over pairs of outcome vectors. To address this problem, we develop Bayesian optimization with preference exploration, a novel framework that alternates between interactive real-time preference learning with the DM via pairwise comparisons between outcomes, and Bayesian optimization with a learned compositional model of DM utility and outcomes. Within this framework, we propose preference exploration strategies specifically designed for this task, and demonstrate their performance via extensive simulation studies.
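To make the alternating framework concrete, below is a minimal, illustrative sketch in Python. It is not the authors' implementation: the toy experiment function, the simulated DM, the linear Bradley-Terry utility model fit by maximum likelihood, the independent per-outcome Gaussian processes (via scikit-learn), and the greedy posterior-mean acquisition over a random candidate set are all simplified stand-ins chosen so the loop structure is visible in a few dozen lines.

```python
# Minimal sketch of Bayesian optimization with preference exploration (BOPE).
# All modeling choices here are simplified stand-ins, not the paper's method.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def experiment(x):
    """Expensive experiment returning a vector-valued outcome (toy example)."""
    return np.array([np.sin(3 * x[0]) + x[1], np.cos(2 * x[1]) - x[0] ** 2])

true_weights = np.array([1.0, 0.5])  # DM's latent utility, unknown to the algorithm

def dm_prefers(y1, y2):
    """Simulated DM: answers pairwise comparison queries via the latent utility."""
    return true_weights @ y1 > true_weights @ y2

def fit_utility(comparisons):
    """Fit linear Bradley-Terry utility weights from pairwise comparisons."""
    def nll(w):
        z = np.array([w @ (a - b) for a, b, _ in comparisons])
        s = np.array([1.0 if pref else -1.0 for _, _, pref in comparisons])
        return np.sum(np.log1p(np.exp(-s * z)))
    return minimize(nll, x0=np.zeros(2)).x

# Initial experiments -> outcome data
X = rng.uniform(-1, 1, size=(5, 2))
Y = np.array([experiment(x) for x in X])
comparisons = []

for iteration in range(10):
    # 1) Preference exploration: show the DM a pair of observed outcome vectors,
    #    record the answer, and refit the utility model.
    i, j = rng.choice(len(Y), size=2, replace=False)
    comparisons.append((Y[i], Y[j], dm_prefers(Y[i], Y[j])))
    w_hat = fit_utility(comparisons)

    # 2) Bayesian optimization: fit per-outcome GPs, then pick the candidate whose
    #    predicted outcomes maximize the learned utility (a greedy stand-in for
    #    the acquisition functions studied in the paper).
    models = [GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, k])
              for k in range(Y.shape[1])]
    candidates = rng.uniform(-1, 1, size=(256, 2))
    pred = np.column_stack([m.predict(candidates) for m in models])
    x_next = candidates[np.argmax(pred @ w_hat)]

    # Run the expensive experiment at the chosen design point.
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, experiment(x_next)])

best = Y[np.argmax(Y @ true_weights)]
print("estimated utility weights:", np.round(w_hat, 2))
print("best outcome found (by true utility):", np.round(best, 2))
```

Note that this sketch pairs up previously observed outcomes at random when querying the DM; the preference exploration strategies proposed in the paper are designed to choose more informative comparison queries than random pairing.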


Related research

Bayesian Optimization with Uncertain Preferences over Attributes (11/14/2019)
We consider black-box global optimization of time-consuming-to-evaluate ...

On Sequential Bayesian Optimization with Pairwise Comparison (03/24/2021)
In this work, we study the problem of user preference learning on the ex...

Sampling Humans for Optimizing Preferences in Coloring Artwork (06/10/2019)
Many circumstances of practical importance have performance or success m...

Hybrid-MST: A Hybrid Active Sampling Strategy for Pairwise Preference Aggregation (10/20/2018)
In this paper we present a hybrid active sampling strategy for pairwise ...

Computational Design with Crowds (02/20/2020)
Computational design is aimed at supporting or automating design process...

Preferential Bayesian Optimization (04/12/2017)
Bayesian optimization (BO) has emerged during the last few years as an e...
