Gaussian processes with linear operator inequality constraints

01/10/2019
by Christian Agrell, et al.

This paper presents an approach for constrained Gaussian Process (GP) regression where we assume that a set of linear transformations of the process is bounded. It is motivated by machine learning applications for high-consequence engineering systems, where this kind of information is often available from phenomenological knowledge, and the resulting constraints may be essential for achieving the level of confidence needed. We consider a GP f over functions on X ⊂ R^n taking values in R, where Lf is still a GP when L is a linear operator. Our goal is to model f under the constraint that realizations of Lf are confined to a convex set of functions. In particular, we require that a ≤ Lf ≤ b for two given functions a and b with a < b pointwise. This formulation provides a consistent way of encoding multiple linear constraints, such as shape constraints based on boundedness, monotonicity or convexity. We adopt the approach of using a sufficiently dense set of virtual observation locations where the constraint is required to hold, and derive the exact posterior for a conjugate likelihood. The results needed for a stable numerical implementation are derived, together with an efficient sampling scheme for estimating the posterior process which is exact in the limit. We give a few numerical examples focusing on noiseless observations, a setting that is relevant for computer code emulation and also more computationally demanding than the alternative scenario with i.i.d. Gaussian noise.
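To make the virtual-observation idea concrete, the following is a minimal Python sketch (not the authors' implementation) for a one-dimensional GP with an RBF kernel, where L = d/dx and the constraint 0 ≤ Lf encodes monotonicity. The constrained posterior is approximated by plain rejection sampling of the virtual derivative values from a truncated Gaussian, rather than the efficient sampler derived in the paper; all variable names, kernel parameters and data values below are illustrative assumptions.

```python
import numpy as np

def k(x, y, ell=0.5, sig=1.0):
    # RBF kernel k(x, y) = sig^2 * exp(-(x - y)^2 / (2 ell^2))
    d = x[:, None] - y[None, :]
    return sig**2 * np.exp(-0.5 * d**2 / ell**2)

def k_fd(x, y, ell=0.5, sig=1.0):
    # Cross-covariance Cov(f(x), f'(y)) = dk/dy
    d = x[:, None] - y[None, :]
    return k(x, y, ell, sig) * d / ell**2

def k_dd(x, y, ell=0.5, sig=1.0):
    # Cov(f'(x), f'(y)) = d^2 k / (dx dy)
    d = x[:, None] - y[None, :]
    return k(x, y, ell, sig) * (1.0 / ell**2 - d**2 / ell**4)

# Noiseless training data and locations for virtual / prediction points.
X  = np.array([0.1, 0.4, 0.9])
y  = np.array([0.0, 0.5, 1.0])
Xv = np.linspace(0.0, 1.0, 10)   # virtual locations where 0 <= f' is imposed
Xs = np.linspace(0.0, 1.0, 50)   # prediction grid

jitter = 1e-8
Kxx = k(X, X) + jitter * np.eye(len(X))

# Unconstrained posterior of the virtual derivatives d = f'(Xv) given the data.
Kvx = k_fd(X, Xv).T                     # Cov(f'(Xv), f(X))
Kvv = k_dd(Xv, Xv) + jitter * np.eye(len(Xv))
A = np.linalg.solve(Kxx, Kvx.T).T       # Kvx @ Kxx^{-1}
mu_d  = A @ y
Sig_d = Kvv - A @ Kvx.T

# Rejection sampling: keep only draws with all virtual derivatives >= 0.
rng, samples = np.random.default_rng(0), []
while len(samples) < 50:
    d = rng.multivariate_normal(mu_d, Sig_d)
    if np.all(d >= 0.0):
        samples.append(d)

# Predict f(Xs) conditioned jointly on the data and one accepted sample.
d = samples[0]
K_joint = np.block([[Kxx, Kvx.T], [Kvx, Kvv]])
K_cross = np.hstack([k(Xs, X), k_fd(Xs, Xv)])   # Cov(f(Xs), [f(X), f'(Xv)])
mean_s  = K_cross @ np.linalg.solve(K_joint, np.concatenate([y, d]))
print(mean_s[::10])
```

Rejection sampling becomes impractical when the unconstrained posterior places little mass on the constrained region, which is one motivation for the dedicated sampling scheme derived in the paper.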
