End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding

03/15/2022
by Mengze Li, et al.

Natural language spatial video grounding aims to detect the relevant objects in video frames, using a descriptive sentence as the query. Despite great advances, most existing methods rely on dense video frame annotations, which require a tremendous amount of human effort. To achieve effective grounding under a limited annotation budget, we investigate one-shot video grounding and learn to ground natural language in all video frames, with only one frame labeled, in an end-to-end manner. One major challenge of end-to-end one-shot video grounding is the presence of video frames that are irrelevant to either the language query or the labeled frame. Another challenge is the limited supervision, which might result in ineffective representation learning. To address these challenges, we design an end-to-end model via an Information Tree for One-Shot video grounding (IT-OS). Its key module, the information tree, can eliminate the interference of irrelevant frames through branch search and branch cropping techniques. In addition, several self-supervised tasks are proposed based on the information tree to improve representation learning under insufficient labeling. Experiments on the benchmark dataset demonstrate the effectiveness of our model.
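To make the tree idea concrete, here is a minimal, hypothetical sketch (not the paper's actual IT-OS architecture): frame features are merged bottom-up into an information tree by mean pooling, branches are scored against the query embedding by cosine similarity, and low-scoring branches are cropped. Function names, the pooling rule, and the threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tree(frame_feats):
    """Recursively merge adjacent frame features into parent nodes
    (mean pooling here, as a stand-in for the paper's learned merge),
    yielding a binary tree whose leaves are frames and whose root
    summarizes the whole clip."""
    levels = [frame_feats]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        parents = [(prev[i] + prev[min(i + 1, len(prev) - 1)]) / 2
                   for i in range(0, len(prev), 2)]
        levels.append(np.stack(parents))
    return levels  # levels[0] = leaves, levels[-1] = root

def crop_branches(levels, query, threshold=0.0):
    """Branch search: score each leaf against the query by cosine
    similarity. Branch cropping: drop leaves below the threshold,
    keeping only query-relevant frames."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scores = np.array([cos(leaf, query) for leaf in levels[0]])
    keep = scores >= threshold
    return keep, scores

# Toy example: 8 frames with 16-d features; the query embedding is
# close to frame 3, so that frame should survive cropping.
frames = rng.normal(size=(8, 16))
query = frames[3] + 0.1 * rng.normal(size=16)
levels = build_tree(frames)
keep, scores = crop_branches(levels, query)
```

In the actual model the merge and scoring would be learned jointly with the grounding objective; this sketch only illustrates the search-then-crop control flow over a frame tree.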


Related research

01/05/2023 · Hypotheses Tree Building for One-Shot Temporal Sentence Localization
Given an untrimmed video, temporal sentence localization (TSL) aims to l...

04/07/2020 · Dense Regression Network for Video Grounding
We address the problem of video grounding from natural language queries....

12/13/2019 · Grounding-Tracking-Integration
In this paper, we study tracking by language that localizes the target b...

06/28/2023 · SpotEM: Efficient Video Search for Episodic Memory
The goal in episodic memory (EM) is to search a long egocentric video to...

03/14/2011 · Sparse Transfer Learning for Interactive Video Search Reranking
Visual reranking is effective to improve the performance of the text-bas...

03/14/2023 · You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos
Given an untrimmed video, temporal sentence grounding (TSG) aims to loca...

12/08/2018 · Explainability by Parsing: Neural Module Tree Networks for Natural Language Visual Grounding
Grounding natural language in images essentially requires composite visu...
