Bayesian Regularization on Function Spaces via Q-Exponential Process

by Shiwei Lan, et al.

Regularization is one of the most important topics in optimization, statistics, and machine learning. To obtain sparsity in estimating a parameter u ∈ ℝ^d, an ℓ_q penalty term, ‖u‖_q, is usually added to the objective function. What is the probability distribution corresponding to such an ℓ_q penalty? What is the correct stochastic process corresponding to ‖u‖_q when we model functions u ∈ L^q? This is important for statistically modeling high-dimensional objects, e.g. images, with a penalty that preserves certain properties, e.g. edges in the image. In this work, we generalize the q-exponential distribution (with density proportional to exp(-|u|^q)) to a stochastic process, named the Q-exponential (Q-EP) process, which corresponds to the L^q regularization of functions. The key step is to specify consistent multivariate q-exponential distributions by choosing from a large family of elliptic contour distributions. The work is closely related to the Besov process, which is usually defined through a series expansion. Q-EP can be regarded as a definition of the Besov process with an explicit probabilistic formulation and direct control over the correlation length. From the Bayesian perspective, Q-EP provides a flexible prior on functions with a sharper penalty (q < 2) than the commonly used Gaussian process (GP). We compare GP, Besov, and Q-EP in modeling time series and reconstructing images, and demonstrate the advantage of the proposed methodology.
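The correspondence the abstract alludes to, between an ℓ_q penalty and a prior density proportional to exp(-|u|^q), can be checked directly in the univariate case. A minimal sketch, using SciPy's generalized normal distribution (`scipy.stats.gennorm`, whose density is proportional to exp(-|x|^β)) as the q-exponential distribution; the choice q = 1 below is illustrative, not from the paper:

```python
import numpy as np
from scipy import stats

q = 1.0  # sharper-than-Gaussian penalty (q < 2); illustrative choice

# The univariate q-exponential density ~ exp(-|u|^q) is the generalized
# normal (gennorm) distribution with shape parameter beta = q.
dist = stats.gennorm(beta=q)

u = np.linspace(-3.0, 3.0, 7)

# The negative log-density equals the l_q penalty |u|^q up to an
# additive normalizing constant, so MAP estimation under this prior
# matches l_q-penalized optimization.
neglogpdf = -dist.logpdf(u)
penalty = np.abs(u) ** q
const = neglogpdf - penalty

print(np.allclose(const, const[0]))  # the two differ only by a constant
```

This is the scalar analogue only; the paper's contribution is extending it consistently to multivariate and function-space settings via elliptic contour distributions.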



A Similarity Measure of Gaussian Process Predictive Distributions

Some scenarios require the computation of a predictive distribution of a...

Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization

This work presents a new procedure for obtaining predictive distribution...

GP-ConvCNP: Better Generalization for Convolutional Conditional Neural Processes on Time Series Data

Neural Processes (NPs) are a family of conditional generative models tha...

Kinetic Energy Plus Penalty Functions for Sparse Estimation

In this paper we propose and study a family of sparsity-inducing penalty...

Towards Scalable Gaussian Process Modeling

Numerous engineering problems of interest to the industry are often char...

Enriched Mixtures of Gaussian Process Experts

Mixtures of experts probabilistically divide the input space into region...

Interpreting a Penalty as the Influence of a Bayesian Prior

In machine learning, it is common to optimize the parameters of a probab...
