XOC: Explainable Observer-Classifier for Explainable Binary Decisions

02/05/2019
by Stephan Alaniz, et al.

When deep neural networks optimize highly complex functions, it is not always obvious how they reach the final decision. Providing explanations would make this decision process more transparent and improve a user's trust in the machine, as explanations help develop a better understanding of the rationale behind the network's predictions. Here, we present an explainable observer-classifier framework that exposes the steps taken through the model's decision-making process. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions, revealing its thought process as a decision tree. In addition, our model hierarchically clusters the data and gives each binary decision a semantic meaning. The sequence of binary decisions learned by our model imitates human-annotated attributes. On six benchmark datasets of increasing size and granularity, our model outperforms the decision-tree baseline and generates easy-to-understand binary decision sequences that explain the network's predictions.
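To illustrate the idea of classifying via iterative binary sub-decisions, the following is a minimal toy sketch, not the authors' implementation: each step emits one bit that descends a complete binary tree, so the bit sequence doubles as the explanation of the final label. The threshold tests on the feature vector are hypothetical stand-ins for the learned observer network in XOC.

```python
def classify(features, depth=3):
    """Return (label, decisions): the leaf index reached and the
    binary root-to-leaf path that explains it."""
    decisions = []
    node = 0  # index into an implicit complete binary tree
    for step in range(depth):
        # Hypothetical sub-decision: threshold one feature per step.
        # In XOC this bit would come from a learned binary decision.
        bit = 1 if features[step] > 0.5 else 0
        decisions.append(bit)
        node = 2 * node + 1 + bit  # descend left (0) or right (1)
    label = node - (2 ** depth - 1)  # leaf index among 2**depth leaves
    return label, decisions
```

Because every prediction is just a root-to-leaf path, the sequence of bits itself serves as the explanation, e.g. `classify([0.9, 0.1, 0.7])` takes the path right, left, right and lands in leaf 5.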
