Topology of deep neural networks

04/13/2020
by Gregory Naitzat, et al.

We study how the topology of a data set M = M_a ∪ M_b ⊆ R^d, representing two classes a and b in a binary classification problem, changes as it passes through the layers of a well-trained neural network, i.e., one with perfect accuracy on the training set and near-zero generalization error (≈ 0.01%). The goal is to shed light on two mysteries in deep neural networks: (i) why a nonsmooth activation function like ReLU outperforms a smooth one like hyperbolic tangent; (ii) why successful neural network architectures rely on having many layers, even though a shallow network can approximate any function arbitrarily well. We performed extensive experiments on the persistent homology of a wide range of point cloud data sets, both real and simulated. The results consistently demonstrate the following: (1) Neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simple one as it passes through the layers. No matter how complicated the topology of M we begin with, when passed through a well-trained neural network f : R^d → R^p, there is a vast reduction in the Betti numbers of both components M_a and M_b; in fact they nearly always reduce to their lowest possible values: β_k(f(M_i)) = 0 for k ≥ 1 and β_0(f(M_i)) = 1, i = a, b. Furthermore, (2) the reduction in Betti numbers is significantly faster for ReLU activation than for hyperbolic tangent activation, because the former defines non-homeomorphic maps that change topology, whereas the latter defines homeomorphic maps that preserve it. Lastly, (3) shallow and deep networks transform data sets differently: a shallow network operates mainly by changing geometry and changes topology only in its final layers, while a deep one spreads topological changes more evenly across all layers.
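To make the experimental idea concrete, here is a minimal sketch (not the authors' code) of how one might track Betti numbers of a two-class point cloud layer by layer: train a small ReLU network, propagate one class through its hidden layers, and compute Vietoris–Rips persistent homology of each intermediate representation. It assumes numpy, scikit-learn, and the ripser package; the toy data set, the fixed filtration scale, and all helper names are illustrative choices, not taken from the paper.

```python
# Sketch: Betti numbers of a point cloud as it passes through ReLU layers.
# Assumes `pip install numpy scikit-learn ripser`. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from ripser import ripser

def betti_numbers(points, scale, maxdim=1):
    """Approximate beta_0, ..., beta_maxdim at a fixed filtration scale:
    beta_k is the number of k-dimensional persistence intervals alive
    at that scale. (The paper works with full persistence diagrams;
    a single fixed scale is a crude simplification.)"""
    dgms = ripser(points, maxdim=maxdim)["dgms"]
    return [int(np.sum((d[:, 0] <= scale) & (d[:, 1] > scale))) for d in dgms]

# Toy data: class a = annulus (beta_0 = 1, beta_1 = 1), class b = blob inside.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
r = rng.uniform(1.5, 2.0, 500)
Ma = np.c_[r * np.cos(theta), r * np.sin(theta)]   # class a
Mb = rng.uniform(-0.8, 0.8, (500, 2))              # class b
X = np.vstack([Ma, Mb])
y = np.r_[np.zeros(500), np.ones(500)]

net = MLPClassifier(hidden_layer_sizes=(15, 15, 15), activation="relu",
                    max_iter=5000, random_state=0).fit(X, y)

def forward_layers(net, points):
    """Yield the image of `points` after each hidden ReLU layer."""
    h = points
    for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)             # affine map + ReLU
        yield h

print("input  :", betti_numbers(Ma, scale=0.5))
for i, h in enumerate(forward_layers(net, Ma), 1):
    print(f"layer {i}:", betti_numbers(h, scale=0.5))
```

If the sketch mirrors finding (1), the one-dimensional hole of the annulus (β_1 = 1) should disappear in the later layers, leaving a single connected component (β_0 = 1, β_k = 0 for k ≥ 1); per finding (2), replacing activation="relu" with "tanh" should make that reduction noticeably slower.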


