Measuring the Interpretability of Unsupervised Representations via Quantized Reverse Probing

09/07/2022
by   Iro Laina, et al.
Self-supervised visual representation learning has recently attracted significant research interest. While a common way to evaluate self-supervised representations is through transfer to various downstream tasks, we instead investigate the problem of measuring their interpretability, i.e., understanding the semantics encoded in raw representations. We formulate the latter as estimating the mutual information between the representation and a space of manually labelled concepts. To quantify this, we introduce a decoding bottleneck: information must be captured by simple predictors mapping concepts to clusters in representation space. This approach, which we call reverse linear probing, provides a single number sensitive to the semanticity of the representation. This measure is also able to detect when the representation contains combinations of concepts (e.g., "red apple") instead of just individual attributes ("red" and "apple" independently). Finally, we propose to use supervised classifiers to automatically label large datasets in order to enrich the space of concepts used for probing. We use our method to evaluate a large number of self-supervised representations, ranking them by interpretability; we highlight the differences that emerge compared to the standard evaluation with linear probes and discuss several qualitative insights. Code at: <https://github.com/iro-cp/ssl-qrp>.
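The core recipe described above can be sketched in a few lines: quantize the representation space with k-means, then measure how much information the concept labels carry about the resulting cluster assignments. The sketch below is a hypothetical illustration, not the authors' released code; the helper name `quantized_reverse_probe` and the closed-form probe (with one-hot concepts, the optimal simple predictor reduces to the empirical conditional distribution over clusters) are assumptions made here for clarity.

```python
import numpy as np

def quantized_reverse_probe(features, concepts, k=4, iters=20, seed=0):
    """Hypothetical sketch of quantized reverse probing: quantize the
    representation with k-means, then estimate the mutual information (in
    bits) between concept labels and cluster assignments."""
    rng = np.random.default_rng(seed)
    # --- quantization: plain k-means over the representation space ---
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = features[assign == j].mean(0)
    # --- simple predictor: empirical p(cluster | concept), which is the
    # optimal probe when concepts are one-hot encoded ---
    joint = np.zeros((concepts.max() + 1, k))
    for c, a in zip(concepts, assign):
        joint[c, a] += 1.0
    joint /= joint.sum()
    p_concept = joint.sum(1, keepdims=True)
    p_cluster = joint.sum(0, keepdims=True)
    nz = joint > 0
    # mutual information I(concept; cluster) in bits
    return (joint[nz] * np.log2(joint[nz] / (p_concept @ p_cluster)[nz])).sum()
```

On two well-separated Gaussian blobs, each tied to one concept, the estimate approaches 1 bit, i.e., the clusters capture the concept perfectly; with shuffled labels it drops toward 0.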


Related research

- Understanding Failure Modes of Self-Supervised Learning (03/03/2022): Self-supervised learning methods have shown impressive results in downst...
- Identifying Interpretable Subspaces in Image Representations (07/20/2023): We propose Automatic Feature Explanation using Contrasting Concepts (FAL...
- Concept Generalization in Visual Representation Learning (12/10/2020): Measuring concept generalization, i.e., the extent to which models train...
- Inter-model Interpretability: Self-supervised Models as a Case Study (07/24/2022): Since early machine learning models, metrics such as accuracy and precis...
- ExAgt: Expert-guided Augmentation for Representation Learning of Traffic Scenarios (07/18/2022): Representation learning in recent years has been addressed with self-sup...
- Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning (10/27/2020): The increasing impact of black box models, and particularly of unsupervi...
- Self-Supervised Radio-Visual Representation Learning for 6G Sensing (11/01/2021): In future 6G cellular networks, a joint communication and sensing protoc...
