Neural Random Projection: From the Initial Task To the Input Similarity Problem

by Alan Savushkin, et al.

In this paper, we propose a novel approach for implicit data representation that evaluates the similarity of input data using a trained neural network. In contrast to the previous approach, which uses gradients for representation, we utilize only the outputs of the last hidden layer of a neural network and require no backward pass. The proposed technique explicitly takes the initial task into account and significantly reduces both the size of the vector representation and the computation time. The key point is minimizing information loss between layers. In general, a neural network discards information that is not related to its task, which makes the last hidden layer representation useless for the input similarity task. In this work, we consider two main causes of information loss: correlation between neurons and an insufficient size of the last hidden layer. To reduce the correlation between neurons, we use orthogonal weight initialization for each layer and modify the loss function to ensure orthogonality of the weights during training. Moreover, we show that activation functions can potentially increase correlation. To address this problem, we apply modified Batch Normalization with Dropout. Using orthogonal weight matrices allows us to view such neural networks as an application of the Random Projection method and to obtain a lower-bound estimate for the size of the last hidden layer. We perform experiments on MNIST and physical examination datasets. In both experiments, we initially split the set of labels into two disjoint subsets to train a neural network for a binary classification problem, and then use this model to measure similarity between input data and to define hidden classes. Our experimental results show that the proposed approach achieves competitive results on the input similarity task while reducing both the computation time and the size of the input representation.
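The two weight-related ingredients described above, orthogonal initialization and a loss term that keeps the weights orthogonal during training, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names and the exact form of the penalty (a soft Frobenius-norm constraint) are assumptions.

```python
import numpy as np

def orthogonal_init(rows, cols, seed=None):
    """Sample a weight matrix with orthonormal rows (rows <= cols)
    or orthonormal columns (rows > cols) via QR decomposition."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, _ = np.linalg.qr(a)  # q has orthonormal columns
    return q[:rows, :cols] if rows >= cols else q[:cols, :rows].T

def orthogonality_penalty(w):
    """Soft orthogonality constraint ||W W^T - I||_F^2, intended to be
    added (with some weighting coefficient) to the task loss."""
    gram = w @ w.T
    return float(np.sum((gram - np.eye(w.shape[0])) ** 2))

# A freshly initialized 4x8 layer has (numerically) zero penalty,
# while an arbitrary matrix does not.
W = orthogonal_init(4, 8, seed=0)
print(orthogonality_penalty(W))              # ~0 up to floating-point error
print(orthogonality_penalty(np.ones((3, 3))))
```

In training, the penalty would typically be scaled by a hyperparameter and summed over layers, so that minimizing the total loss keeps each weight matrix close to orthogonal while still fitting the classification task.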




