Towards interpreting computer vision based on transformation invariant optimization

06/18/2021
by Chen Li, et al.

Interpreting how deep neural networks (DNNs) make predictions is a vital field in artificial intelligence, and the lack of interpretability hinders wide application of DNNs. Visualization of learned representations helps us humans understand the vision of DNNs. In this work, visualized images that activate the neural network toward target classes are generated by a back-propagation method. Rotation and scaling operations are applied to introduce transformation invariance into the image-generation process, which we find significantly improves visualization quality. Finally, we show cases in which this method helps us gain insight into neural networks.
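The abstract describes activation maximization by gradient ascent, with random transformations applied to the image during optimization. A minimal NumPy sketch of that loop is below; the "network score" is a toy linear template (a stand-in for a real DNN forward pass), and the transformation is a random 90-degree rotation (a crude pure-NumPy stand-in for the paper's arbitrary rotation and scaling). All names here are illustrative, not from the paper.

```python
import numpy as np

# Toy stand-in for a DNN class score: a dot product with a fixed
# "class template" w. A real use would replace score/grad with a
# network forward pass and a back-propagated gradient.
rng = np.random.default_rng(0)
H = W = 8
w = rng.normal(size=(H, W))

def score(img):
    return float(np.sum(w * img))

def grad(img):
    # d(score)/d(img) for this linear toy score is just w.
    return w

def visualize(steps=100, lr=0.1):
    img = rng.normal(scale=0.01, size=(H, W))
    for _ in range(steps):
        # Transformation invariance: randomly rotate the image
        # before each gradient-ascent step, so the optimized image
        # must activate the score under many orientations.
        k = rng.integers(0, 4)
        img = np.rot90(img, k)
        img = img + lr * grad(img)
    return img

out = visualize()
```

Without the random rotation the loop degenerates to plain activation maximization; the per-step transformation is what pushes the result toward a transformation-invariant visualization, as the abstract argues.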
