Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning

05/20/2016
by Alexander N. Gorban, et al.

Most machine learning approaches stem from the principle of minimizing the mean squared distance, which relies on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals exhibit serious weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Many recent applications in machine learning therefore exploit properties of non-quadratic error functionals based on the L_1 norm, or even sub-linear potentials corresponding to the quasinorms L_p (0<p<1). The drawback of these approaches is an increased computational cost of optimization. So far, no approach has been suggested for dealing with arbitrary error functionals in a flexible and computationally efficient framework. In this paper, we develop a theory and basic universal data approximation algorithms (k-means, principal components, principal manifolds and graphs, regularized and sparse regression) based on piece-wise quadratic error potentials of subquadratic growth (PQSQ potentials). We develop a new and universal framework for minimizing arbitrary sub-quadratic error potentials using an algorithm with guaranteed fast convergence to a local or global error minimum. The theory of PQSQ potentials is based on the notion of the cone of minorant functions, and represents a natural approximation formalism built on min-plus algebra. The approach can be applied in most existing machine learning methods, including data approximation and regularized and sparse regression, improving the computational cost/accuracy trade-off. We demonstrate on synthetic and real-life datasets that PQSQ-based machine learning methods achieve orders of magnitude faster computational performance than the corresponding state-of-the-art methods.
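Below is a minimal Python sketch of the core PQSQ construction as the abstract describes it: a subquadratic potential f is approximated by a min-plus (tropical) combination of quadratic pieces that interpolate f at a chosen set of interval thresholds. The function names, the threshold grid, and the choice of f(x) = |x| here are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def pqsq_coefficients(f, thresholds):
    """Coefficients (m_k, b_k) so that q_k(x) = b_k + m_k * x**2
    interpolates f at consecutive thresholds r_k, r_{k+1}.
    The last interval gets m_p = 0 (flat tail), which caps the
    penalty for large residuals and yields subquadratic growth."""
    r = np.asarray(thresholds, dtype=float)   # 0 = r_0 < r_1 < ... < r_p
    fr = np.array([f(v) for v in r])
    m = np.empty(len(r))
    b = np.empty(len(r))
    # interior intervals: quadratic through (r_k, f(r_k)) and (r_{k+1}, f(r_{k+1}))
    m[:-1] = (fr[1:] - fr[:-1]) / (r[1:]**2 - r[:-1]**2)
    b[:-1] = fr[:-1] - m[:-1] * r[:-1]**2
    # trimming interval [r_p, +inf): constant potential
    m[-1] = 0.0
    b[-1] = fr[-1]
    return m, b

def pqsq(x, m, b):
    """PQSQ potential u(x) = min_k (b_k + m_k * x**2): a min-plus
    combination of quadratics, evaluated pointwise."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    q = b[None, :] + m[None, :] * (x**2)[:, None]  # every quadratic at every x
    return q.min(axis=1)

# Example: PQSQ approximation of the L1 potential f(x) = |x|
# (the threshold grid is an illustrative choice, not prescribed by the paper)
m, b = pqsq_coefficients(abs, thresholds=[0.0, 0.5, 1.0, 2.0, 4.0])
xs = np.linspace(-5, 5, 11)
print(np.round(pqsq(xs, m, b), 3))  # tracks |x| inside [-4, 4], flat outside
```

This construction is also what makes the resulting optimization fast: because each piece is an ordinary quadratic, minimization can alternate between assigning each residual to its active interval (choosing which quadratic attains the minimum) and solving a weighted quadratic problem, so every step reuses standard quadratic machinery.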
