Visual Commonsense in Pretrained Unimodal and Multimodal Models

05/04/2022
by Chenyu Zhang, et al.

Our commonsense knowledge about objects includes their typical visual attributes; we know that bananas are typically yellow or green, and not purple. Text and image corpora, being subject to reporting bias, represent this world knowledge to varying degrees of faithfulness. In this paper, we investigate to what degree unimodal (language-only) and multimodal (image and language) models capture a broad range of visually salient attributes. To that end, we create the Visual Commonsense Tests (ViComTe) dataset covering 5 property types (color, shape, material, size, and visual co-occurrence) for over 5000 subjects. We validate this dataset by showing that our grounded color data correlates much better than ungrounded text-only data with crowdsourced color judgments provided by Paik et al. (2021). We then use our dataset to evaluate pretrained unimodal and multimodal models. Our results indicate that multimodal models better reconstruct attribute distributions, but are still subject to reporting bias. Moreover, increasing model size does not enhance performance, suggesting that the key to visual commonsense lies in the data.
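
As a rough illustration of the kind of zero-shot probe the abstract describes, the sketch below queries a pretrained masked language model for a subject's color distribution. The prompt template, the candidate color list, and the choice of bert-base-uncased are illustrative assumptions on our part, not necessarily the templates or models used in the paper.

```python
# A minimal sketch (not the authors' code) of probing a masked LM
# for an object's typical color, in the spirit of ViComTe.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Candidate colors: an illustrative label set, not the paper's.
COLORS = ["red", "orange", "yellow", "green", "blue", "purple",
          "brown", "black", "white", "gray", "pink"]

def color_distribution(subject: str) -> dict:
    # Assumed prompt template; the paper may phrase its probes differently.
    prompt = f"The color of a {subject} is [MASK]."
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the [MASK] token's position in the tokenized prompt.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Restrict the vocabulary to the candidate colors and renormalize.
    ids = [tokenizer.convert_tokens_to_ids(c) for c in COLORS]
    probs = torch.softmax(logits[ids], dim=0)
    return dict(zip(COLORS, probs.tolist()))

print(color_distribution("banana"))  # mass should fall on "yellow"/"green"
```

A distribution obtained this way could then be compared, for example by rank correlation, against a grounded attribute distribution such as ViComTe's for the same subject; this sketches the general recipe rather than the authors' exact evaluation.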

Related research

05/14/2022
What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge
There are limitations in learning language from text alone. Therefore, r...

10/15/2021
The World of an Octopus: How Reporting Bias Influences a Language Model's Perception of Color
Recent work has raised concerns about the inherent limitations of text-o...

09/13/2022
Visual Recipe Flow: A Dataset for Learning Visual State Changes of Objects with Recipe Flows
We present a new multimodal dataset called Visual Recipe Flow, which ena...

05/24/2023
ImageNetVC: Zero-Shot Visual Commonsense Evaluation on 1000 ImageNet Categories
Recently, Pretrained Language Models (PLMs) have been serving as general...

06/04/2023
Probing Physical Reasoning with Counter-Commonsense Context
In this study, we create a CConS (Counter-commonsense Contextual Size co...

09/15/2022
VIPHY: Probing "Visible" Physical Commonsense Knowledge
In recent years, vision-language models (VLMs) have shown remarkable per...

10/05/2016
VoxML: A Visualization Modeling Language
We present the specification for a modeling language, VoxML, which encod...
