Rates of Convergence for Regression with the Graph Poly-Laplacian

09/06/2022
by   Nicolas Garcia Trillos, et al.

In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularisation. Higher-order regularity can be obtained by replacing the Laplacian regulariser with a poly-Laplacian regulariser. The methodology is readily adapted to graphs, and here we consider graph poly-Laplacian regularisation in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset {x_i}_{i=1}^n and a set of noisy labels {y_i}_{i=1}^n ⊂ ℝ, we let u_n : {x_i}_{i=1}^n → ℝ be the minimiser of an energy consisting of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When y_i = g(x_i) + ξ_i for i.i.d. noise ξ_i, and using the geometric random graph, we identify (with high probability) the rate of convergence of u_n to g in the large-data limit n → ∞. Furthermore, our rate, up to logarithms, coincides with the known rate of convergence in the usual smoothing spline model.
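The estimator described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it builds an ε-ball geometric graph on the data, forms the unnormalised graph Laplacian L, and minimises the energy ||u − y||² + τ uᵀLᵐu, whose first-order condition is the linear system (I + τLᵐ)u = y. The parameters eps, m, and tau, and the synthetic target g(x) = sin(2πx₁), are illustrative choices, not values from the paper.

```python
import numpy as np

def graph_poly_laplacian_regression(X, y, eps=0.2, m=2, tau=1e-3):
    """Minimise ||u - y||^2 + tau * u^T L^m u over u, where L is the
    unnormalised Laplacian of the eps-ball geometric graph on X."""
    # Pairwise squared distances and 0/1 edge weights for the eps-graph.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = (D2 <= eps ** 2).astype(float)
    np.fill_diagonal(W, 0.0)
    # Unnormalised graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W
    # The quadratic energy is minimised by solving (I + tau * L^m) u = y.
    A = np.eye(len(y)) + tau * np.linalg.matrix_power(L, m)
    return np.linalg.solve(A, y)

# Synthetic example: noisy labels y_i = g(x_i) + xi_i on the unit square.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
g = np.sin(2 * np.pi * X[:, 0])
y = g + 0.1 * rng.standard_normal(200)
u = graph_poly_laplacian_regression(X, y)
```

Note that since the constant vector lies in the null space of L, the fitted values preserve the sample mean of the labels, and taking tau → 0 recovers u = y exactly (pure interpolation of the noisy labels).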
