Learning Explicit Deep Representations from Deep Kernel Networks

by Mingyuan Jiu et al.

Deep kernel learning designs nonlinear combinations of standard elementary kernels by training deep networks. This scheme has proven effective but becomes intractable on large-scale datasets, especially as the depth of the trained networks increases: the complexity of evaluating these networks scales quadratically with the size of the training data and linearly with their depth. In this paper, we address the issue of efficient computation in Deep Kernel Networks (DKNs) by designing effective maps in the underlying Reproducing Kernel Hilbert Spaces. Given a pretrained DKN, our method builds an associated Deep Map Network (DMN) whose inner product approximates the original network while being far more efficient to evaluate. The design is greedy and proceeds layer-wise, finding maps that approximate the DKN at the input, intermediate, and output layers; an extra fine-tuning step based on unsupervised learning further enhances the generalization ability of the trained DMNs. When plugged into SVMs, these DMNs are as accurate as the underlying DKNs while being at least an order of magnitude faster on large-scale datasets, as shown through extensive experiments on the challenging ImageCLEF and COREL5k benchmarks.
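The core idea, an explicit feature map whose inner products approximate an expensive kernel, can be illustrated with a generic stand-in. The sketch below uses random Fourier features to approximate a Gaussian (RBF) kernel; it is not the paper's layer-wise DMN construction, only a minimal example of replacing pairwise kernel evaluations with an explicit map:

```python
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, rng=None):
    """Explicit map phi(X) whose inner products approximate the
    RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    Generic illustration of kernel-map approximation; NOT the
    paper's Deep Map Network."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # w ~ N(0, 2*gamma*I) is the spectral density of the RBF kernel
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the exact kernel matrix with approximate inner products
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)                      # gamma = 0.5
Phi = random_fourier_features(X, n_features=4096, gamma=0.5, rng=1)
K_approx = Phi @ Phi.T
err = np.abs(K_exact - K_approx).max()           # small approximation error
```

Once such a map is available, a linear SVM on `Phi` stands in for a kernel SVM, so training and evaluation scale linearly rather than quadratically in the number of samples, which is the efficiency argument the abstract makes for DMNs.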


Related articles

- Cascaded Coarse-to-Fine Deep Kernel Networks for Efficient Satellite Image Change Detection
- Learning Multiple Levels of Representations with Kernel Machines
- End-to-end training of deep kernel map networks for image classification
- Kernel machines with two layers and multiple kernel learning
- Learning with Neural Tangent Kernels in Near Input Sparsity Time
- Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding
- Hierarchical regularization networks for sparsification based learning on noisy datasets
