Neural Networks are Decision Trees
In this manuscript, we show that any neural network with piecewise-linear activation functions can be represented as a decision tree. The representation is an equivalence rather than an approximation, so the accuracy of the neural network is preserved exactly. This equivalence shows that neural networks are indeed interpretable by design and renders black-box treatments of them obsolete. We share the equivalent trees of some neural networks and show that, besides providing interpretability, the tree representation can also offer computational advantages. The analysis holds for both fully connected and convolutional networks, which may or may not include skip connections and/or normalizations.
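The intuition behind the equivalence can be illustrated with a minimal sketch (not the paper's construction): for a one-hidden-layer ReLU network, each hidden unit's on/off test is a hyperplane split, and within a fixed on/off pattern (a "leaf") the network collapses to a single affine map. The weights below are hypothetical and chosen only for illustration.

```python
import numpy as np

# Toy 2-layer ReLU network: y = W2 @ relu(W1 @ x + b1) + b2
# (hypothetical weights, for illustration only)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def activation_pattern(x):
    # Each hidden unit's on/off state acts as one branching decision:
    # the test "w_i . x + b_i > 0" splits the input space by a hyperplane.
    return tuple((W1 @ x + b1) > 0)

def leaf_affine_map(pattern):
    # With the pattern fixed (a "leaf"), inactive units are zeroed out,
    # so the layer composition collapses to one affine map.
    D = np.diag(np.asarray(pattern, dtype=float))
    return W2 @ D @ W1, W2 @ D @ b1 + b2

# An input's activation pattern is its path through the tree; inputs sharing
# a pattern are mapped by the same affine function (the leaf's response).
x = np.array([0.5, -1.0])
W_eff, b_eff = leaf_affine_map(activation_pattern(x))
assert np.allclose(forward(x), W_eff @ x + b_eff)
print("pattern (tree path):", activation_pattern(x))
print("network output:", forward(x), "leaf affine map:", W_eff @ x + b_eff)
```

In deeper networks the splits at later layers depend on the path taken so far, which is what yields a tree rather than a flat partition; the sketch only shows the single-layer case.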