ImageCaptioner^2: Image Captioner for Image Captioning Bias Amplification Assessment

04/10/2023
by   Eslam Mohamed Bakr, et al.

Most pre-trained learning systems are known to suffer from bias, which typically emerges from the data, the model, or both. Measuring and quantifying bias and its sources is a challenging task and has been extensively studied in image captioning. Despite the significant effort in this direction, we observed that existing metrics lack consistency in the inclusion of the visual signal. In this paper, we introduce a new bias assessment metric for image captioning, dubbed ImageCaptioner^2. Instead of measuring the absolute bias in the model or the data, ImageCaptioner^2 focuses on the bias introduced by the model w.r.t. the data bias, termed bias amplification. Unlike existing methods, which evaluate image captioning algorithms based only on the generated captions, ImageCaptioner^2 incorporates the image while measuring the bias. In addition, we design a formulation for measuring the bias of generated captions as prompt-based image captioning instead of using language classifiers. Finally, we apply our ImageCaptioner^2 metric across 11 different image captioning architectures on three different datasets, i.e., the MS-COCO caption dataset, Artemis V1, and Artemis V2, and on three different protected attributes, i.e., gender, race, and emotions. We verify the effectiveness of our ImageCaptioner^2 metric by proposing AnonymousBench, a novel human evaluation paradigm for bias metrics. Our metric shows significant superiority over the recent bias metric LIC in terms of human alignment, where the correlation scores are 80. The code is available at https://eslambakr.github.io/imagecaptioner2.github.io/.
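To make the notion of bias amplification concrete, the sketch below contrasts an attribute-leakage score computed on model-generated captions against the same score on human-written captions; their difference is the bias the model adds on top of the data. This is only a minimal illustration of the general idea, not the paper's method: ImageCaptioner^2 uses prompt-based captioning conditioned on the image, whereas this sketch uses a crude, hypothetical keyword-matching proxy, and all function names and keyword lists are assumptions.

```python
def attribute_bias(captions, attributes, keywords):
    """Fraction of captions whose wording reveals the subject's protected
    attribute. Keyword matching is a crude stand-in for the classifier- or
    captioner-based leakage measures used by actual bias metrics."""
    leaked = sum(
        any(word in cap.lower() for word in keywords[attr])
        for cap, attr in zip(captions, attributes)
    )
    return leaked / len(captions)

def bias_amplification(model_captions, human_captions, attributes, keywords):
    """Bias the captioning model introduces on top of the bias already
    present in the human-written (training) captions."""
    return (attribute_bias(model_captions, attributes, keywords)
            - attribute_bias(human_captions, attributes, keywords))
```

For example, if human annotators wrote "a person cooking" but the model generates "a woman cooking", the model leaks the gender attribute where the data did not, yielding a positive amplification score.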
