Learning Bottleneck Concepts in Image Classification

04/20/2023
by Bowen Wang et al.

Interpreting and explaining the behavior of deep neural networks is critical for many tasks. Explainable AI offers a way to address this challenge, mostly by providing per-pixel relevance scores for a decision; interpreting such explanations, however, may require expert knowledge. Some recent attempts at interpretability adopt a concept-based framework, which relates model decisions to higher-level concepts. This paper proposes Bottleneck Concept Learner (BotCL), which represents an image solely by the presence/absence of concepts learned through training on the target task, without explicit supervision over the concepts. BotCL uses self-supervision and tailored regularizers so that the learned concepts become human-understandable. Using several image classification tasks as a testbed, we demonstrate BotCL's potential to rebuild neural networks for better interpretability. Code is available at https://github.com/wbw520/BotCL and a simple demo is available at https://botcl.liangzhili.com/.
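
To make the bottleneck idea concrete, the sketch below shows one way such a model can be wired up in PyTorch: a CNN backbone yields a spatial feature map, learnable concept prototypes attend over it to produce per-concept presence scores, and the classifier sees only those scores. This is a minimal illustration under assumed names (ConceptBottleneckClassifier, concept_prototypes, etc.), not BotCL's actual implementation, and it omits the self-supervision and regularizers the paper uses to make the concepts human-understandable.

```python
# Minimal sketch of a concept-bottleneck classifier in PyTorch.
# Illustrative only: module and variable names are assumptions, not BotCL's code.
import torch
import torch.nn as nn
import torchvision.models as models


class ConceptBottleneckClassifier(nn.Module):
    def __init__(self, num_concepts: int = 20, num_classes: int = 200):
        super().__init__()
        # CNN backbone producing a spatial feature map of shape (B, C, H, W).
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        feat_dim = 512
        # One learnable prototype per concept; attention over spatial
        # locations decides where (and whether) each concept appears.
        self.concept_prototypes = nn.Parameter(torch.randn(num_concepts, feat_dim))
        # The "bottleneck": classification uses only the concept scores,
        # via a single linear layer mapping presence scores to class logits.
        self.classifier = nn.Linear(num_concepts, num_classes, bias=False)

    def forward(self, x):
        f = self.features(x)                      # (B, C, H, W)
        f = f.flatten(2).transpose(1, 2)          # (B, HW, C)
        # Spatial attention of each concept prototype over the feature map.
        attn = torch.einsum("kc,bnc->bkn", self.concept_prototypes, f)
        attn = torch.sigmoid(attn)                # (B, K, HW), soft presence per location
        # Concept activation: does concept k appear anywhere in the image?
        concept_scores = attn.max(dim=-1).values  # (B, K), values in [0, 1]
        logits = self.classifier(concept_scores)
        return logits, concept_scores, attn


# Usage: the concept scores are the only information the classifier sees,
# so they can be inspected (or binarized) to explain a prediction.
model = ConceptBottleneckClassifier(num_concepts=20, num_classes=200)
images = torch.randn(4, 3, 224, 224)
logits, concepts, attention = model(images)
print(logits.shape, concepts.shape)  # torch.Size([4, 200]) torch.Size([4, 20])
```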


Related research

EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction (05/28/2019)
With the advent of deep neural networks, some research focuses towards u...

Abstracting Deep Neural Networks into Concept Graphs for Concept Level Interpretability (08/14/2020)
The black-box nature of deep learning models prevents them from being co...

MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks (11/03/2020)
Deep convolutional networks have been quite successful at various image ...

Architecture Disentanglement for Deep Neural Networks (03/30/2020)
Deep Neural Networks (DNNs) are central to deep learning, and understand...

Time Regularization in Optimal Time Variable Learning (06/28/2023)
Recently, optimal time variable learning in deep neural networks (DNNs) ...

A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts (05/01/2021)
Despite substantial progress in applying neural networks (NN) to a wide ...

TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts (10/07/2022)
Explaining deep learning models is of vital importance for understanding...
