Information Theoretic Learning with Infinitely Divisible Kernels

01/16/2013
by   Luis G. Sanchez Giraldo, et al.

In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Rényi's axiomatic definition of entropy and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids plug-in estimation of densities and brings along the representation power of reproducing kernel Hilbert spaces. As an application example, we derive a supervised metric learning algorithm using a matrix-based analogue to conditional entropy, achieving results comparable with the state of the art.
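The core idea, an entropy-like functional evaluated directly on a normalized Gram matrix rather than on a plug-in density estimate, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the matrix-based Rényi-type functional S_α(A) = (1/(1−α)) log₂ tr(Aᵅ) applied to A = K/tr(K), with the Gaussian kernel (an infinitely divisible kernel) used to build K; the function name, bandwidth, and toy data are for demonstration only.

```python
import numpy as np

def matrix_renyi_entropy(K, alpha=2.0):
    """Entropy-like functional on a positive semidefinite Gram matrix K.

    Sketch of the matrix-based formulation: normalize K so its trace is 1,
    then compute S_alpha(A) = (1 / (1 - alpha)) * log2(trace(A^alpha)),
    which reduces to a function of the eigenvalues of A.
    """
    A = K / np.trace(K)
    # Eigenvalues of the normalized PSD matrix lie in [0, 1] and sum to 1,
    # so they play the role a probability mass function plays in Renyi entropy.
    eigvals = np.linalg.eigvalsh(A)
    eigvals = eigvals[eigvals > 1e-12]  # discard numerical zeros
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(eigvals ** alpha))

# Toy usage: Gaussian (infinitely divisible) kernel on random 2-D data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / 2.0)  # bandwidth chosen arbitrarily for illustration
H = matrix_renyi_entropy(K, alpha=2.0)
```

No density is ever estimated: the functional depends only on the eigenvalue spectrum of the normalized kernel matrix, which is bounded between 0 (one dominant eigenvalue) and log₂ n (uniform spectrum, e.g. K proportional to the identity).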
