Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space

04/25/2020
by Yingyi Ma, et al.

Invariance (defined in a general sense) has been one of the most effective priors for representation learning. Direct factorization of parametric models is feasible only for a small range of invariances, while regularization approaches, despite improved generality, lead to nonconvex optimization. In this work, we develop a convex representation learning algorithm for a variety of generalized invariances that can be modeled as semi-norms. It is much more efficient than Haar kernels and distributionally robust methods. Our approach is based on Euclidean embeddings of kernel representers in a semi-inner-product space, and experimental results confirm its effectiveness in learning invariant representations and making accurate predictions.
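To make the "invariances modeled as semi-norms" idea concrete, here is a small illustrative sketch (not the paper's algorithm): a semi-norm satisfies the norm axioms except that it may vanish on nonzero vectors, and that null space encodes exactly the directions a representation is allowed to be invariant to. The operator `D` and function `seminorm` below are hypothetical names chosen for the example.

```python
import numpy as np

def seminorm(x, D):
    """p(x) = ||D x||_2. If D has a nontrivial null space, p is a
    semi-norm: it is nonnegative, absolutely homogeneous, and satisfies
    the triangle inequality, but vanishes on all of null(D)."""
    return np.linalg.norm(D @ x)

# Example: D = first-order difference operator on R^3. Its null space is
# spanned by the constant vector (1, 1, 1), so p ignores constant shifts.
D = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])

x = np.array([0.5, -1.0, 2.0])
shift = 3.0 * np.ones(3)

# Absolute homogeneity holds, yet p(shift) == 0 although shift != 0:
assert np.isclose(seminorm(2.0 * x, D), 2.0 * seminorm(x, D))
assert np.isclose(seminorm(shift, D), 0.0)
# Consequence: p is invariant to adding a constant shift.
assert np.isclose(seminorm(x + shift, D), seminorm(x, D))
```

Penalizing such a semi-norm in a learning objective discourages variation along `D`'s row space while leaving the null-space directions unconstrained, which is the sense in which a semi-norm regularizer encodes an invariance prior.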

