Understanding the Importance of Single Directions via Representative Substitution

11/27/2018
by Li Chen, et al.

Understanding the internal representations of deep neural networks (DNNs) is crucial for explaining their behavior. The interpretation of individual units (neurons in MLPs or convolution kernels in convolutional networks) has received much attention because these units play a fundamental role. However, recent research (Morcos et al. 2018) presented a counterintuitive phenomenon: individual units with high class selectivity, called interpretable units, contribute little to the generalization of DNNs. In this work, we provide a new perspective on this counterintuitive phenomenon, which we argue actually makes sense once we introduce Representative Substitution (RS). Instead of measuring a unit's selectivity with respect to classes, RS measures the independence of a unit's representation from the other representations in the same layer, without requiring any annotation. Our experiments demonstrate that interpretable units have low RS and are not important to the network's generalization. RS provides new insight into the interpretation of DNNs and suggests that we should focus on the independence of, and relationships among, representations.
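The abstract does not spell out how RS is computed, but the underlying idea, scoring how independent a unit's representation is from the other units in the same layer, can be sketched. Below is a minimal, hypothetical NumPy illustration that scores each unit by how poorly it can be linearly reconstructed (substituted) from its layer-mates; the function name, the linear-reconstruction metric, and the synthetic activations are all assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def independence_score(acts: np.ndarray, unit: int) -> float:
    """Hypothetical independence measure for one unit.

    acts: (n_samples, n_units) activations of one layer on a batch.
    Returns 1 - R^2 of a linear reconstruction of `unit` from the
    remaining units: high values mean the unit is hard to substitute,
    i.e. more independent (an RS-like score in the loose sense
    sketched here; the paper's exact definition may differ).
    """
    y = acts[:, unit]
    X = np.delete(acts, unit, axis=1)
    X = np.column_stack([X, np.ones(len(X))])  # add a bias column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2) + 1e-12  # guard against /0
    return ss_res / ss_tot  # residual fraction = 1 - R^2

# Example: a layer with 64 units evaluated on 1,000 inputs
# (random activations stand in for a real forward pass).
rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 64))
scores = [independence_score(acts, i) for i in range(acts.shape[1])]
print("least substitutable unit:", int(np.argmax(scores)))
```

Under the abstract's claim, interpretable (highly class-selective) units would tend to receive low scores from a measure like this, marking them as easy to substitute and hence less important to generalization.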