Fitting Elephants

03/31/2021
by Partha P. Mitra

Textbook wisdom advocates smooth function fits and implies that interpolating noisy data should lead to poor generalization. A related heuristic is that the number of fitted parameters should be smaller than the number of measurements (Occam's razor). Surprisingly, contemporary machine learning (ML) approaches, e.g., deep neural networks (DNNs), generalize well despite interpolating noisy data. This may be understood via Statistically Consistent Interpolation (SCI), i.e., data interpolation techniques that generalize optimally for big data. In this article we elucidate SCI using the weighted interpolating nearest neighbors (wiNN) algorithm, which adds singular weight functions to k-nearest neighbors (kNN). This shows that data interpolation can be a valid ML strategy for big data. SCI clarifies the relation between two ways of modeling natural phenomena: the rationalist approach (strong priors) of theoretical physics, with few parameters, and the empiricist approach (weak priors) of modern ML, with more parameters than data. SCI shows that the purely empirical approach can predict successfully. However, data interpolation does not provide theoretical insight, and its training data requirements may be prohibitive. Complex animal brains lie between these extremes, with many parameters but modest training data, and with prior structure encoded in species-specific mesoscale circuitry. Thus, modern ML offers an epistemological approach distinct both from physical theories and from animal brains.
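To make the interpolation mechanism concrete, here is a minimal Python sketch of a wiNN-style estimator. It is an illustration under stated assumptions, not the paper's exact construction: the function name winn_predict, the power-law singular kernel w(r) = r^(-delta), and the exact-match rule are all illustrative choices. The key property is that the weights diverge as a query point approaches a training point, so the fit passes exactly through the (noisy) training labels while behaving as a local weighted average elsewhere.

```python
import numpy as np

def winn_predict(X_train, y_train, X_query, k=5, delta=2.0):
    """Weighted interpolating nearest neighbors (wiNN), minimal sketch.

    Each prediction is a weighted average of the labels of the k
    nearest training points, with singular weights w(r) = r**(-delta).
    Because w(r) -> infinity as r -> 0, the estimator interpolates the
    training data exactly while averaging locally away from it.
    """
    preds = np.empty(len(X_query))
    for j, x in enumerate(X_query):
        dists = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(dists)[:k]           # indices of the k nearest neighbors
        r = dists[nn]
        if r[0] == 0.0:                      # query coincides with a training point:
            preds[j] = y_train[nn[0]]        # reproduce its label exactly
            continue
        w = r ** (-delta)                    # singular weights, diverging as r -> 0
        preds[j] = np.dot(w, y_train[nn]) / w.sum()
    return preds

# Noisy 1-D regression: the fit passes through every noisy label,
# yet predictions between points remain a sensible local average.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=200)
assert np.allclose(winn_predict(X, y, X), y)  # exact interpolation of noisy data
```

In the statistically consistent regime, k and the strength of the singularity would be scaled appropriately with sample size and dimension; the fixed k and delta above are for illustration only.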
