Learning multiple regularization parameters for generalized Tikhonov regularization using multiple data sets without true data

12/23/2021
by Michael J. Byrne, et al.

During the inversion of discrete linear systems, noise in the data can be amplified and produce meaningless solutions. To combat this effect, desirable characteristics of the solution are mathematically enforced during inversion, a process called regularization. The influence of the supplied prior information is controlled by one or more non-negative regularization parameters. A number of methods exist for selecting appropriate regularization parameters, as do a number of methods for performing the inversion. In this paper, we consider the unbiased risk estimator, generalized cross validation, and the discrepancy principle as the means of selecting regularization parameters. When multiple data sets describing the same physical phenomena are available, the use of multiple regularization parameters can improve the results. Here we demonstrate that it is possible to learn multiple regularization parameters using parameter-selection estimators that are modified to handle multiple parameters and multiple data sets. The results show that these modified methods, which do not require true data for learning the regularization parameters, are effective and efficient, and perform comparably to methods that rely on true data for learning the relevant parameters.
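For readers unfamiliar with the single-parameter setting that the paper generalizes, the following is a minimal sketch, assuming standard-form Tikhonov regularization (identity regularization operator, one parameter, one data set, full-column-rank forward matrix) with the parameter chosen by generalized cross validation via the SVD. The function name tikhonov_gcv and the search bracket are illustrative choices, not part of the paper, which instead treats generalized Tikhonov with multiple parameters and multiple data sets.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tikhonov_gcv(A, b):
    # Illustrative sketch (not the paper's algorithm): standard-form Tikhonov
    # regularization  x(lam) = argmin ||A x - b||^2 + lam^2 ||x||^2  with the
    # single parameter lam selected by generalized cross validation (GCV).
    # Assumes A has full column rank so every singular value is positive.
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                          # data coefficients in the SVD basis

    def gcv(log_lam):
        lam2 = np.exp(2.0 * log_lam)
        f = s**2 / (s**2 + lam2)            # Tikhonov filter factors
        # Residual norm squared, including the part of b outside range(A).
        resid2 = np.sum(((1.0 - f) * beta) ** 2) + max(b @ b - beta @ beta, 0.0)
        trace = m - np.sum(f)               # trace of I minus the influence matrix
        return resid2 / trace**2

    # Minimize the GCV function over log(lam) on a generous bracket.
    res = minimize_scalar(gcv, bounds=(-12.0, 12.0), method="bounded")
    lam = np.exp(res.x)
    f = s**2 / (s**2 + lam**2)
    x = Vt.T @ ((f / s) * beta)             # filtered (regularized) solution
    return x, lam
```

In the multi-parameter, multi-data-set setting studied in the paper, the scalar GCV function above would be replaced by its modified multi-parameter analogue (or by the modified unbiased risk estimator or discrepancy principle) and minimized over a vector of regularization parameters.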
