Concadia: Tackling image accessibility with context

04/16/2021
by Elisa Kreiss, et al.

Images have become an integral part of online media. This has enhanced self-expression and the dissemination of knowledge, but it poses serious accessibility challenges. Adequate textual descriptions are rare. Captions are more abundant, but they do not consistently provide the needed descriptive details, and systems trained on such texts inherit these shortcomings. To address this, we introduce the publicly available Wikipedia-based corpus Concadia, which consists of 96,918 images with corresponding English-language descriptions, captions, and surrounding context. We use Concadia to further characterize the commonalities and differences between descriptions and captions, and this leads us to the hypothesis that captions, while not substitutes for descriptions, can provide a useful signal for creating effective descriptions. We substantiate this hypothesis by showing that image captioning systems trained on Concadia benefit from having caption embeddings as part of their inputs. These experiments also begin to show how Concadia can be a powerful tool in addressing the underlying accessibility issues posed by image data.
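To make the modeling idea concrete, here is a minimal sketch, not the authors' implementation, of how a caption embedding might be fused with image features to condition a description generator. The class name, feature dimensions, fusion-by-concatenation choice, and GRU decoder are all illustrative assumptions; the paper only establishes that adding caption embeddings to the input helps.

```python
import torch
import torch.nn as nn

class CaptionConditionedDescriber(nn.Module):
    """Hypothetical sketch: generate an image description conditioned on
    both image features and an embedding of the image's caption."""

    def __init__(self, img_dim=2048, cap_dim=768, hidden=512, vocab=30000):
        super().__init__()
        # Project image features and the caption embedding into a shared space.
        self.img_proj = nn.Linear(img_dim, hidden)
        self.cap_proj = nn.Linear(cap_dim, hidden)
        # Fuse the two projections into one conditioning vector.
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.embed = nn.Embedding(vocab, hidden)
        # The fused vector initializes a simple GRU decoder.
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, img_feats, cap_emb, desc_tokens):
        # img_feats: (B, img_dim); cap_emb: (B, cap_dim);
        # desc_tokens: (B, T) gold description tokens (teacher forcing).
        cond = torch.tanh(self.fuse(torch.cat(
            [self.img_proj(img_feats), self.cap_proj(cap_emb)], dim=-1)))
        h0 = cond.unsqueeze(0)            # (1, B, hidden) initial decoder state
        x = self.embed(desc_tokens)       # (B, T, hidden)
        out, _ = self.decoder(x, h0)
        return self.out(out)              # (B, T, vocab) next-token logits

# Shape check with random tensors:
model = CaptionConditionedDescriber()
logits = model(torch.randn(4, 2048), torch.randn(4, 768),
               torch.randint(0, 30000, (4, 12)))
print(logits.shape)  # torch.Size([4, 12, 30000])
```

Concatenation before the decoder is just the simplest fusion strategy; the same idea could be realized with attention over caption tokens or any other conditioning mechanism.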
