RetinotopicNet: An Iterative Attention Mechanism Using Local Descriptors with Global Context

05/12/2020
by Thomas Kurbiel, et al.

Convolutional Neural Networks (CNNs) have been the driving force behind many recent advances in Computer Vision. This progress has spawned many practical applications, and there is an increasing need to deploy CNNs efficiently on embedded systems. However, traditional CNNs lack scale and rotation invariance, even though scaling and rotation are two of the most frequently encountered transformations in natural images. As a consequence, CNNs have to learn different features for the same objects at different scales. This redundancy is the main reason why CNNs need to be very deep to achieve the desired accuracy. In this paper we develop an efficient solution by reproducing how nature has solved the problem in the human brain. To this end, we let our CNN operate on small patches extracted using the log-polar transform, which is known to be scale- and rotation-equivariant. Patches extracted in this way have the useful property of magnifying the central field and compressing the periphery; hence we obtain local descriptors with global context information. However, processing a single patch is usually not sufficient to achieve high accuracy in e.g. classification tasks. We therefore successively jump to several different locations, called saccades, thus building an understanding of the whole image. Since log-polar patches contain global context information, subsequent saccade targets can be computed efficiently from the small patches alone. Saccades thereby compensate for the lack of translation equivariance of the log-polar transform.
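The paper's exact extraction pipeline is not detailed in the abstract, but the core idea can be sketched with plain NumPy: sample an image on a log-polar grid around a fixation point, so that radius is spaced exponentially (magnified center, compressed periphery) and angle is spaced uniformly. The function below is a minimal illustration with nearest-neighbour interpolation; the function name, output size, and clamping behaviour are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def log_polar_patch(img, center, out_shape=(64, 64), max_radius=None):
    """Sample a log-polar patch of `img` around `center` (row, col).

    Rows of the output index the angle theta in [0, 2*pi); columns index
    the log-radius, from ~1 pixel out to `max_radius`. Under this sampling,
    scaling the image about `center` becomes a shift along the column axis
    and rotation about `center` becomes a shift along the row axis, which
    is the scale/rotation equivariance the abstract refers to.
    """
    h, w = img.shape[:2]
    cy, cx = center
    if max_radius is None:
        max_radius = min(h, w) / 2
    n_theta, n_rho = out_shape
    # Exponentially spaced radii: dense near the center, sparse at the rim.
    rho = np.exp(np.linspace(0.0, np.log(max_radius), n_rho))
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    # Broadcast to a (n_theta, n_rho) grid of Cartesian sample positions.
    yy = cy + rho[None, :] * np.sin(theta[:, None])
    xx = cx + rho[None, :] * np.cos(theta[:, None])
    # Nearest-neighbour sampling with border clamping (assumed here).
    yi = np.clip(np.round(yy).astype(int), 0, h - 1)
    xi = np.clip(np.round(xx).astype(int), 0, w - 1)
    return img[yi, xi]
```

A rotation-equivariant CNN can then be run on the patch, and a rotation of the input only cyclically shifts the patch rows, which an ordinary convolution with circular padding handles naturally.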


Related research

07/21/2020 · CyCNN: A Rotation Invariant CNN using Polar Mapping and Cylindrical Convolution Layers
Deep Convolutional Neural Networks (CNNs) are empirically known to be in...

11/04/2019 · Human eye inspired log-polar pre-processing for neural networks
In this paper we draw inspiration from the human visual system, and pres...

08/15/2019 · Beyond Cartesian Representations for Local Descriptors
The dominant approach for learning local patch descriptors relies on sma...

03/02/2021 · Contextually Guided Convolutional Neural Networks for Learning Most Transferable Representations
Deep Convolutional Neural Networks (CNNs), trained extensively on very l...

09/06/2017 · Polar Transformer Networks
Convolutional neural networks (CNNs) are inherently equivariant to trans...

03/03/2020 · What's the relationship between CNNs and communication systems?
The interpretability of Convolutional Neural Networks (CNNs) is an impor...
