Mathematical Analysis on Out-of-Sample Extensions

by Jianzhong Wang et al.

Let X̄ = X ∪ Z be a data set in R^D, where X is the training set and Z is the test set. Many unsupervised learning algorithms based on kernel methods provide a dimensionality reduction (DR) embedding Φ: X → R^d (d ≪ D) that maps the high-dimensional training data X to its low-dimensional feature representation Y = Φ(X). However, these algorithms do not straightforwardly produce a DR of the test set Z. An out-of-sample extension method provides a DR of Z by extending the existing embedding Φ, instead of re-computing the DR embedding for the whole set X̄. Among the various out-of-sample DR extension methods, those based on the Nyström approximation are particularly attractive. Many papers have developed such out-of-sample extension algorithms and demonstrated their validity through numerical experiments; however, the mathematical theory of the DR extension still needs further development. Utilizing the theory of reproducing kernel Hilbert spaces (RKHS), this paper develops a preliminary mathematical analysis of out-of-sample DR extension operators. It treats an out-of-sample DR extension operator as an extension of the identity on an RKHS defined on X. The Nyström-type DR extension then turns out to be an orthogonal projection. We also present conditions under which the DR extension is exact and give an estimate for the error of the extension.
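To make the Nyström-type extension concrete, here is a minimal numerical sketch, assuming a Gaussian (RBF) kernel and a kernel-PCA-style embedding; the function names and the choice of kernel are illustrative, not taken from the paper. The training embedding comes from the top-d eigenpairs of the kernel matrix on X, and a test point z is embedded by projecting its kernel column k_z = (K(z, x_i))_i onto those eigenvectors, scaled by the inverse square roots of the eigenvalues:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_embedding(X, d, gamma=1.0):
    # Eigendecompose the training kernel matrix: K = U diag(lam) U^T.
    K = rbf_kernel(X, X, gamma)
    lam, U = np.linalg.eigh(K)
    idx = np.argsort(lam)[::-1][:d]      # keep the top-d eigenpairs
    lam_d, U_d = lam[idx], U[:, idx]
    Y = U_d * np.sqrt(lam_d)             # training embedding Y = U_d diag(lam_d)^{1/2}
    return lam_d, U_d, Y

def nystrom_extend(X, lam_d, U_d, Z, gamma=1.0):
    # Nyström out-of-sample extension: Phi(z) = diag(lam_d)^{-1/2} U_d^T k_z,
    # i.e. an orthogonal projection of k_z onto the span of the eigenfunctions.
    K_zx = rbf_kernel(Z, X, gamma)       # kernel columns k_z for each test point
    return K_zx @ U_d / np.sqrt(lam_d)
```

One sanity check consistent with the identity-extension viewpoint above: applied back to the training points themselves, the extension reproduces the training embedding exactly, since K U_d = U_d diag(lam_d).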


