The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods

by Louis Thiry, et al.

A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, reaching accuracies in the range of 87-90% while being more amenable to theoretical analysis. In this work, we highlight the importance of a data-dependent feature-extraction step that is key to obtaining good performance in convolutional kernel methods. This step typically corresponds to a whitened dictionary of patches, and gives rise to data-driven convolutional kernel methods. We study its effect extensively, demonstrating that it is the key ingredient behind the high performance of these methods. Specifically, we show that one of the simplest instances of such kernel methods, based on a single layer of image patches followed by a linear classifier, already obtains classification accuracies on CIFAR-10 in the same range as previous, more sophisticated convolutional kernel methods. We scale this method to the challenging ImageNet dataset, showing that such a simple approach can exceed all existing non-learned representation methods. This establishes a new baseline for object recognition without representation learning, and initiates the investigation of convolutional kernel models on ImageNet. We conduct experiments to analyze the dictionaries we use; our ablations show that they exhibit low-dimensional properties.
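The data-dependent step the abstract refers to — sampling random image patches and whitening them to form a dictionary — can be sketched in a few lines. The following is a minimal illustration in NumPy, not the authors' implementation: the function names, patch counts, and the use of grayscale inputs are assumptions for the example, and the whitening shown is standard ZCA whitening of the patch covariance.

```python
import numpy as np

def extract_patches(images, patch_size=3, n_patches=500, seed=0):
    """Sample random square patches from a batch of grayscale images
    of shape (n_images, height, width), flattened to vectors."""
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    patches = np.empty((n_patches, patch_size * patch_size))
    for k in range(n_patches):
        i = rng.integers(0, n)
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        patches[k] = images[i, y:y + patch_size, x:x + patch_size].ravel()
    return patches

def zca_whiten(patches, eps=1e-5):
    """ZCA-whiten flattened patches: center them, then rotate/scale so the
    empirical covariance is (approximately) the identity. Returns the
    whitened patches plus the mean and whitening matrix for reuse."""
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return X @ W, mean, W

# Build a whitened patch dictionary from random synthetic images.
images = np.random.default_rng(1).random((10, 8, 8))
dictionary, mean, W = zca_whiten(extract_patches(images))
```

In the full pipeline described in the paper, images are then encoded against such a dictionary (e.g., by convolution with the whitened patches followed by a simple nonlinearity and pooling) and fed to a linear classifier; the sketch above covers only the dictionary-construction step.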


On Approximation in Deep Convolutional Networks: a Kernel Perspective

The success of deep convolutional networks on tasks involving high-di...

SVM and ELM: Who Wins? Object Recognition with Deep Convolutional Features from ImageNet

Deep learning with a convolutional neural network (CNN) has been proved ...

End-to-End Kernel Learning with Supervised Convolutional Kernel Networks

In this paper, we introduce a new image representation based on a multil...

Deep Network classification by Scattering and Homotopy dictionary learning

We introduce a sparse scattering deep convolutional neural network, whic...

Selfie: Self-supervised Pretraining for Image Embedding

We introduce a pretraining technique called Selfie, which stands for SEL...

Copy-move Forgery Detection based on Convolutional Kernel Network

In this paper, a copy-move forgery detection method based on Convolution...

Why Size Matters: Feature Coding as Nystrom Sampling

Recently, the computer vision and machine learning community has been in...
