VIOLIN: A Large-Scale Dataset for Video-and-Language Inference

03/25/2020
by Jingzhou Liu, et al.

We introduce a new task, Video-and-Language Inference, for joint multimodal understanding of video and text. Given a video clip with aligned subtitles as premise, paired with a natural language hypothesis based on the video content, a model needs to infer whether the hypothesis is entailed or contradicted by the given video clip. A new large-scale dataset, named Violin (VIdeO-and-Language INference), is introduced for this task, which consists of 95,322 video-hypothesis pairs from 15,887 video clips, spanning over 582 hours of video. These video clips contain rich content with diverse temporal dynamics, event shifts, and people interactions, collected from two sources: (i) popular TV shows, and (ii) movie clips from YouTube channels. In order to address our new multimodal inference task, a model is required to possess sophisticated reasoning skills, from surface-level grounding (e.g., identifying objects and characters in the video) to in-depth commonsense reasoning (e.g., inferring causal relations of events in the video). We present a detailed analysis of the dataset and an extensive evaluation over many strong baselines, providing valuable insights on the challenges of this new task.
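The task format described above, a (video, subtitles) premise paired with a natural-language hypothesis and a binary entailed/contradicted label, can be sketched as a simple data structure. The field names and the toy examples below are illustrative assumptions, not the actual Violin release schema; the trivial always-predict-entailed baseline is included only to show why the dataset's balanced labels matter.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ViolinExample:
    # Premise: a video clip plus its time-aligned subtitles.
    # Field names are hypothetical, not the official dataset schema.
    video_id: str
    subtitles: List[str]   # aligned subtitle lines for the clip
    hypothesis: str        # natural-language statement about the clip
    entailed: bool         # True = entailed, False = contradicted

def always_entailed_accuracy(examples: List[ViolinExample]) -> float:
    """Accuracy of a trivial baseline that always predicts 'entailed'.
    On a label-balanced dataset this sits near 50%, so any real model
    must beat that by grounding the hypothesis in the video and text."""
    if not examples:
        return 0.0
    return sum(ex.entailed for ex in examples) / len(examples)

# Toy premise with one entailed and one contradicted hypothesis.
examples = [
    ViolinExample("clip_001", ["Hey, where were you?"],
                  "Someone asks a question.", True),
    ViolinExample("clip_001", ["Hey, where were you?"],
                  "The room is empty.", False),
]
print(always_entailed_accuracy(examples))  # 0.5 on this balanced toy pair
```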


Related research:

- 10/15/2020, "What is More Likely to Happen Next? Video-and-Language Future Event Prediction": Given a video with aligned dialogue, people can often infer what is more...
- 01/24/2020, "TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval": We introduce a new multimodal retrieval task - TV show Retrieval (TVR), ...
- 07/26/2021, "Adaptive Hierarchical Graph Reasoning with Semantic Coherence for Video-and-Language Inference": Video-and-Language Inference is a recently proposed task for joint video...
- 03/26/2022, "Visual Abductive Reasoning": Abductive reasoning seeks the likeliest possible explanation for partial...
- 06/14/2022, "Multimodal Event Graphs: Towards Event Centric Understanding of Multimodal World": Understanding how events described or shown in multimedia content relate...
- 03/03/2015, "Using Descriptive Video Services to Create a Large Data Source for Video Annotation Research": In this work, we introduce a dataset of video annotated with high qualit...
- 07/26/2017, "Video Highlight Prediction Using Audience Chat Reactions": Sports channel video portals offer an exciting domain for research on mu...
