Universally consistent vertex classification for latent positions graphs

12/05/2012
by Minh Tang, et al.

In this work we show that, using the eigen-decomposition of the adjacency matrix, we can consistently estimate feature maps for latent position graphs with positive definite link function κ, provided that the latent positions are i.i.d. from some distribution F. We then consider the exploitation task of vertex classification, where the link function κ belongs to the class of universal kernels, class labels are observed for a number of vertices tending to infinity, and the remaining vertices are to be classified. We show that minimizing the empirical φ-risk, for some convex surrogate φ of the 0-1 loss, over a class of linear classifiers of increasing complexity yields a universally consistent classifier, that is, a classification rule whose error converges to the Bayes optimal error for any distribution F.
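The abstract describes the pipeline only in words; below is a minimal sketch of the idea, not the paper's exact estimator. It assumes a two-block stochastic block model as the latent position graph, adjacency spectral embedding into a chosen dimension d, and a linear SVM whose hinge loss stands in for the convex surrogate φ. The function name `adjacency_spectral_embedding` and all parameter values are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def adjacency_spectral_embedding(A, d):
    """Embed vertices using the top-d eigen-decomposition of the adjacency matrix.

    Returns an n x d matrix whose rows act as estimated feature maps
    (eigenvectors scaled by the square roots of the eigenvalues).
    """
    # eigh returns eigenvalues in ascending order for a symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(A)
    idx = np.argsort(eigvals)[::-1][:d]          # indices of the d largest eigenvalues
    top_vals, top_vecs = eigvals[idx], eigvecs[:, idx]
    return top_vecs * np.sqrt(np.clip(top_vals, 0, None))

# --- toy usage on a two-block stochastic block model (illustrative only) ---
rng = np.random.default_rng(0)
n, d = 400, 2
labels = rng.integers(0, 2, size=n)            # latent class of each vertex
B = np.array([[0.5, 0.2], [0.2, 0.5]])          # block connection probabilities
P = B[labels][:, labels]                        # edge probability matrix
A = rng.binomial(1, np.triu(P, 1))              # sample the upper triangle
A = A + A.T                                     # symmetrize, no self-loops

X = adjacency_spectral_embedding(A, d)          # estimated feature maps

train = rng.random(n) < 0.5                     # vertices with observed labels
clf = LinearSVC()                               # hinge loss: a convex surrogate of the 0-1 loss
clf.fit(X[train], labels[train])
print("held-out accuracy:", clf.score(X[~train], labels[~train]))
```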
