Convergence rates of vector-valued local polynomial regression

07/13/2021
by Yariv Aizenbud et al.

Non-parametric estimation of functions and their derivatives by means of local polynomial regression has been studied in the literature since the late 1970s. Given a set of noisy samples of a 𝒞^k smooth function, we perform a local polynomial fit, and by taking its m-th derivative we obtain an estimate of the m-th derivative of the function. The known optimal rates of convergence for this problem, for a k-times smooth function f: ℝ^d → ℝ, are n^(-(k-m)/(2k+d)). However, in modern applications it is often the case that we have to estimate a function mapping into ℝ^D, where D ≫ d can be extremely large. In this work, we prove that these same rates of convergence are also achievable by local polynomial regression in the case of a high-dimensional target, given some assumptions on the noise distribution. This result extends Stone's seminal work from 1980 to the regime of a high-dimensional target domain. In addition, we unveil a connection between the failure probability ε and the number of samples required to achieve the optimal rates.
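To make the estimator concrete, the following is a minimal sketch of local polynomial regression for a vector-valued target in the simplest setting d = 1: a weighted least-squares polynomial fit around a query point x0, solved jointly for all D output coordinates, from which the m-th derivative is read off the fitted coefficients. The function name `local_poly_derivative` and all parameter choices (Gaussian kernel, degree, bandwidth) are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np
from math import factorial

def local_poly_derivative(x, Y, x0, m=1, degree=3, bandwidth=0.1):
    """Estimate the m-th derivative of a vector-valued function at x0.

    x : (n,) sample locations in R (d = 1 for simplicity)
    Y : (n, D) noisy samples of f(x_i) in R^D
    Fits a kernel-weighted polynomial of the given degree around x0
    and reads the m-th derivative off the fitted coefficients.
    """
    # Gaussian kernel weights localize the fit around x0.
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    # Design matrix in centered monomials (x - x0)^j / j!, so that the
    # coefficient of column m is exactly the m-th derivative at x0.
    X = np.stack([(x - x0) ** j / factorial(j)
                  for j in range(degree + 1)], axis=1)
    # Weighted least squares, solved for all D output coordinates at once.
    W = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(W * X, W * Y, rcond=None)
    return beta[m]  # shape (D,): estimate of f^(m)(x0)

# Usage: estimate the first derivative of a smooth curve in R^D.
rng = np.random.default_rng(0)
n, D = 500, 50
x = np.sort(rng.uniform(0.0, 1.0, n))
f = np.stack([np.sin(2 * np.pi * (x + c))
              for c in np.linspace(0, 1, D)], axis=1)
Y = f + 0.05 * rng.standard_normal((n, D))
d1 = local_poly_derivative(x, Y, x0=0.5, m=1, degree=3, bandwidth=0.08)
```

Because the least-squares problem is solved for all D coordinates against the same design matrix, the per-query cost grows only linearly in D; the paper's contribution is showing that the statistical rates likewise do not degrade as the target dimension grows, under suitable noise assumptions.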
