Deep Epitome for Unravelling Generalized Hamming Network: A Fuzzy Logic Interpretation of Deep Learning

11/15/2017
by   Lixin Fan, et al.

This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHNs) proposed by Fan (2017) and discloses an interesting finding about GHNs: stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer. On the theoretical side, the revealed equivalence can be regarded as a constructive manifestation of the universal approximation theorem (Cybenko, 1989; Hornik, 1991). In practice, it has profound and multi-fold implications. For network visualization, the deep epitomes constructed at each layer provide a visualization of the network's internal representation that does not rely on the input data. Moreover, deep epitomes allow the direct extraction of features in just one step, without resorting to the regularized optimizations used in existing visualization tools.
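The collapse of stacked convolution layers into one wide layer can be illustrated in the purely linear case, where it follows from the associativity of convolution. The sketch below uses NumPy 1-D convolutions; note this is only the linear-case intuition, not the paper's full GHN construction, which extends the argument through its fuzzy-logic (generalized hamming distance) interpretation:

```python
import numpy as np

# Linear-case intuition for "stacked conv layers == one wide conv layer":
# convolution is associative, (x * k1) * k2 == x * (k1 * k2), so applying
# two kernels in sequence equals applying their convolution once.
x = np.random.default_rng(0).standard_normal(32)  # toy input signal
k1 = np.array([1.0, -1.0, 0.5])                   # first-layer kernel (width 3)
k2 = np.array([0.25, 0.75])                       # second-layer kernel (width 2)

stacked = np.convolve(np.convolve(x, k1), k2)     # two layers applied in sequence
collapsed_kernel = np.convolve(k1, k2)            # single wider kernel (width 3+2-1 = 4)
single = np.convolve(x, collapsed_kernel)         # one wide layer

assert np.allclose(stacked, single)
```

The collapsed kernel plays the role the paper assigns to a "deep epitome": one wide filter that summarizes the whole stack and can be inspected without feeding in any input data.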


Related research:

- 10/27/2017: Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network. "We revisit fuzzy neural network with a cornerstone notion of generalized..."
- 05/07/2019: Two classes of linear codes and their generalized Hamming weights. "The generalized Hamming weights (GHWs) are fundamental parameters of lin..."
- 04/10/2017: Unsupervised prototype learning in an associative-memory network. "Unsupervised learning in a generalized Hopfield associative-memory netwo..."
- 07/03/2018: A note on the generalized Hamming weights of Reed-Muller codes. "In this note, we give a very simple description of the generalized Hammi..."
- 09/14/2023: Is Solving Graph Neural Tangent Kernel Equivalent to Training Graph Neural Network? "A rising trend in theoretical deep learning is to understand why deep le..."
- 08/24/2021: Adaptive and Interpretable Graph Convolution Networks Using Generalized Pagerank. "We investigate adaptive layer-wise graph convolution in deep GCN models..."
- 10/01/2019: NESTA: Hamming Weight Compression-Based Neural Proc. Engine. "In this paper, we present NESTA, a specialized Neural engine that signif..."
