Rekall: Specifying Video Events using Compositions of Spatiotemporal Labels

10/07/2019
by Daniel Y. Fu, et al.

Many real-world video analysis applications require the ability to identify domain-specific events in video, such as interviews and commercials in TV news broadcasts, or action sequences in film. Unfortunately, pre-trained models to detect all the events of interest in video may not exist, and training new models from scratch can be costly and labor-intensive. In this paper, we explore the utility of specifying new events in video in a more traditional manner: by writing queries that compose outputs of existing, pre-trained models. To write these queries, we have developed Rekall, a library that exposes a data model and programming model for compositional video event specification. Rekall represents video annotations from different sources (object detectors, transcripts, etc.) as spatiotemporal labels associated with continuous volumes of spacetime in a video, and provides operators for composing labels into queries that model new video events. We demonstrate the use of Rekall in analyzing video from cable TV news broadcasts, films, static-camera vehicular video streams, and commercial autonomous vehicle logs. In these efforts, domain experts were able to quickly (in a few hours to a day) author queries that enabled the accurate detection of new events (on par with, and in some cases much more accurate than, learned approaches) and to rapidly retrieve video clips for human-in-the-loop tasks such as video content curation and training data curation. Finally, in a user study, novice users of Rekall were able to author queries to retrieve new events in video given just one hour of query development time.
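To make the compositional approach described above concrete, below is a minimal, self-contained Python sketch of a query in this style, assuming per-frame face-detection labels for a news host and a guest as inputs. The names here (Label, temporal_join, coalesce, detect_interviews) are hypothetical illustrations of the data model and operators the abstract describes, not Rekall's actual API.

```python
# Hypothetical sketch: spatiotemporal labels as volumes of video spacetime,
# composed with set-like operators to define a new event (an "interview").
# Names and signatures are illustrative, not Rekall's real API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Label:
    t1: float            # start time (seconds)
    t2: float            # end time (seconds)
    x1: float = 0.0      # normalized spatial bounds of the label
    x2: float = 1.0
    y1: float = 0.0
    y2: float = 1.0
    payload: Optional[dict] = None

def overlaps(a: Label, b: Label) -> bool:
    """True if two labels overlap in time."""
    return a.t1 < b.t2 and b.t1 < a.t2

def temporal_join(xs: List[Label], ys: List[Label]) -> List[Label]:
    """Pair temporally overlapping labels and keep their time intersection."""
    out = []
    for a in xs:
        for b in ys:
            if overlaps(a, b):
                out.append(Label(max(a.t1, b.t1), min(a.t2, b.t2),
                                 payload={"left": a, "right": b}))
    return out

def coalesce(xs: List[Label], gap: float = 0.0) -> List[Label]:
    """Merge labels separated by at most `gap` seconds into longer segments."""
    merged: List[Label] = []
    for lbl in sorted(xs, key=lambda l: l.t1):
        if merged and lbl.t1 - merged[-1].t2 <= gap:
            merged[-1].t2 = max(merged[-1].t2, lbl.t2)
        else:
            merged.append(Label(lbl.t1, lbl.t2))
    return merged

def detect_interviews(host_faces: List[Label],
                      guest_faces: List[Label]) -> List[Label]:
    """Toy query: video segments where a host face and a guest face co-occur."""
    co_occurrences = temporal_join(host_faces, guest_faces)
    return coalesce(co_occurrences, gap=5.0)  # bridge short cuts between shots
```

A real query over the paper's domains would additionally exploit the spatial extent of each label (e.g., face position and size) and richer predicates, but the core pattern sketched here, joining overlapping labels from different sources and coalescing the results into event segments, is the compositional idea the abstract describes.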


