From dense to sparse design: Optimal rates under the supremum norm for estimating the mean function in functional data analysis

by Max Berger et al.

In the setting of functional data analysis, we derive optimal rates of convergence in the supremum norm for estimating the Hölder-smooth mean function of stochastic processes which are repeatedly and discretely observed at fixed, multivariate, synchronous design points and with additional errors. Similarly to the rates in L_2 obtained in Cai and Yuan (2011), for sparse design a discretization term dominates, while in the dense case the √n rate can be achieved as if the n processes were continuously observed without errors. However, our analysis differs in several respects from Cai and Yuan (2011). First, we do not assume that the paths of the processes are as smooth as the mean, but still obtain the √n rate of convergence without additional logarithmic factors in the dense setting. Second, we show that in the supremum norm, there is an intermediate regime between the sparse and dense cases dominated by the contribution of the observation errors. Third, and in contrast to the analysis in L_2, interpolation estimators turn out to be sub-optimal in L_∞ in the dense setting, which explains their poor empirical performance. We also obtain a central limit theorem in the supremum norm and discuss the selection of the bandwidth. Simulations and real data applications illustrate the results.
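To make the sampling model concrete, the following is a minimal, hedged sketch (not the paper's estimator) of the setting described above: n noisy processes observed discretely on a common synchronous grid, with the mean function recovered by a simple Gaussian-kernel smoother of the pooled curves and the error measured in the supremum norm. The specific mean function, kernel, bandwidth, and process model are illustrative assumptions only.

```python
import numpy as np

# Illustrative assumptions only: sin mean, two-term random paths, Gaussian kernel.
rng = np.random.default_rng(0)

n, p = 200, 25                         # number of curves, design points per curve
t = np.linspace(0.0, 1.0, p)           # fixed, synchronous design points
mu = lambda s: np.sin(2 * np.pi * s)   # smooth mean function (assumed for the demo)

# Observations Y_ij = mu(t_j) + (random process at t_j) + observation error
scores = rng.normal(size=(n, 2))
paths = scores[:, [0]] * np.cos(np.pi * t) + scores[:, [1]] * np.sin(np.pi * t)
Y = mu(t) + paths + 0.3 * rng.normal(size=(n, p))

def kernel_mean(grid, t_obs, Y, h):
    """Gaussian-kernel (Nadaraya-Watson) estimate of the mean on `grid`.

    With a synchronous design, the curves are pooled first, then smoothed.
    """
    ybar = Y.mean(axis=0)  # pointwise average over the n curves
    W = np.exp(-0.5 * ((grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (W * ybar).sum(axis=1) / W.sum(axis=1)

grid = np.linspace(0.0, 1.0, 401)
mu_hat = kernel_mean(grid, t, Y, h=0.08)
sup_err = np.max(np.abs(mu_hat - mu(grid)))
print(f"sup-norm error: {sup_err:.3f}")
```

Increasing the number of design points p moves the simulation from the sparse toward the dense regime discussed in the abstract, where the discretization term stops dominating the sup-norm error.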



