Optimal rates for the regularized learning algorithms under general source condition

11/07/2016
by   Abhishake Rastogi, et al.

We consider learning algorithms under a general source condition, with polynomial decay of the eigenvalues of the integral operator, in the vector-valued function setting. We derive upper convergence rates for the Tikhonov regularizer under a general source condition corresponding to an increasing monotone index function. Convergence is studied for general regularization schemes in the minimax setting using the concept of operator monotone index functions. Further, we address the minimum possible error achievable by any learning algorithm.
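For context, the assumptions named in the abstract are typically formalized as follows in the kernel regularization literature; the notation below (L_K for the integral operator, \mathcal{H} for the reproducing kernel Hilbert space, f_\rho for the target function) is standard, but the paper's exact constants and conditions may differ:

% General source condition with index function \phi:
f_\rho = \phi(L_K)\, v, \qquad \|v\|_{\mathcal{H}} \le R

% Polynomial decay of the eigenvalues \lambda_i of L_K:
\alpha\, i^{-b} \le \lambda_i(L_K) \le \beta\, i^{-b}, \qquad b > 1

% Tikhonov regularization over the sample z = \{(x_i, y_i)\}_{i=1}^{m}:
f_{z,\lambda} = \operatorname*{arg\,min}_{f \in \mathcal{H}} \ \frac{1}{m}\sum_{i=1}^{m} \|f(x_i) - y_i\|_Y^2 + \lambda \|f\|_{\mathcal{H}}^2

The convergence rates discussed in the abstract describe how fast f_{z,\lambda} approaches f_\rho as the sample size m grows, under these two structural assumptions.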


Related research

- 11/12/2016: Kernel regression, minimax rates and effective dimensionality: beyond the regular case. We investigate if kernel regularization methods can achieve minimax conv...
- 07/25/2022: Optimal Convergence Rates of Deep Neural Networks in a Classification Setting. We establish optimal convergence rates up to a log-factor for a class of...
- 10/13/2017: Manifold regularization based on Nyström type subsampling. In this paper, we study the Nyström type subsampling for large scale ker...
- 03/02/2015: Unregularized Online Learning Algorithms with General Loss Functions. In this paper, we consider unregularized online learning algorithms in a...
- 04/16/2022: PAC-Bayesian Based Adaptation for Regularized Learning. In this paper, we propose a PAC-Bayesian a posteriori parameter selectio...
- 05/21/2012: Conditional mean embeddings as regressors - supplementary. We demonstrate an equivalence between reproducing kernel Hilbert space (...
- 05/21/2023: Rational approximations of operator monotone and operator convex functions. Operator convex functions defined on the positive half-line play a promi...
