Efficient Approximation of Expected Hypervolume Improvement using Gauss-Hermite Quadrature

by Alma Rahat, et al.

Many methods for performing multi-objective optimisation of computationally expensive problems have been proposed recently. Typically, a probabilistic surrogate for each objective is constructed from an initial dataset. The surrogates can then be used to produce predictive densities in the objective space for any solution. Using the predictive densities, we can compute the expected hypervolume improvement (EHVI) due to a solution. Maximising the EHVI, we can locate the most promising solution that may be expensively evaluated next. There are closed-form expressions for computing the EHVI, integrating over the multivariate predictive densities. However, they require partitioning the objective space, which can be prohibitively expensive for more than three objectives. Furthermore, there are no closed-form expressions for problems where the predictive densities are dependent, capturing the correlations between objectives. Monte Carlo approximation is used instead in such cases, but it is computationally costly. Hence, there remains a need for new approximation methods that are accurate yet cheaper. Here we investigate an alternative approach to approximating the EHVI using Gauss-Hermite quadrature. We show that it can be an accurate alternative to Monte Carlo for both independent and correlated predictive densities, with statistically significant rank correlations across a range of popular test problems.
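To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of approximating the EHVI of a candidate under a bivariate Gaussian predictive density via Gauss-Hermite quadrature. The hypervolume routine, the test front, and all parameter values are illustrative assumptions; correlation between objectives is handled through the Cholesky factor of the predictive covariance, as the abstract's correlated case suggests.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume (minimisation) of a 2-D point set w.r.t. reference point ref."""
    pts = sorted({tuple(p) for p in points if np.all(p < ref)})
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hvi(y, front, ref):
    """Hypervolume improvement of candidate objective vector y over the front."""
    return hypervolume_2d(np.vstack([front, y]), ref) - hypervolume_2d(front, ref)

def ehvi_gauss_hermite(mu, cov, front, ref, n=16):
    """Approximate EHVI under Y ~ N(mu, cov) with an n x n Gauss-Hermite grid.

    Substituting Y = mu + sqrt(2) L u (with L the Cholesky factor of cov)
    turns E[HVI(Y)] into pi^{-d/2} * sum_ij w_i w_j HVI(mu + sqrt(2) L [u_i, u_j]).
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    L = np.linalg.cholesky(cov)  # captures objective correlations
    total = 0.0
    for wi, zi in zip(weights, nodes):
        for wj, zj in zip(weights, nodes):
            y = mu + np.sqrt(2.0) * L @ np.array([zi, zj])
            total += wi * wj * hvi(y, front, ref)
    return total / np.pi  # pi^{d/2} normalisation with d = 2
```

With n nodes per objective the quadrature needs n^d evaluations of the hypervolume improvement, which is why the tensor-product construction is attractive for low numbers of objectives but, like the partitioning schemes mentioned above, grows quickly with dimension.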

