Temporal Interlacing Network

01/17/2020
by   Hao Shao, et al.

For a long time, the vision community has tried to learn spatio-temporal representations by combining convolutional neural networks with various temporal models, such as the families of Markov chains, optical flow, RNNs and temporal convolutions. However, these pipelines consume enormous computing resources because spatial and temporal information are learned alternately. A natural question is whether we can embed the temporal information into the spatial one, so that the information in the two domains can be learned jointly in a single pass. In this work, we answer this question by presenting a simple yet powerful operator, the temporal interlacing network (TIN). Instead of learning temporal features separately, TIN fuses the two kinds of information by interlacing spatial representations from the past to the future, and vice versa. A differentiable interlacing target can be learned to control the interlacing process. In this way, a heavy temporal model is replaced by a simple interlacing operator. We theoretically prove that with a learnable interlacing target, TIN performs equivalently to the regularized temporal convolution network (r-TCN), but gains 4% more accuracy on 6 challenging benchmarks. These results push the state-of-the-art performance of video understanding forward by a considerable margin. Not surprisingly, the ensemble model of the proposed TIN won 1st place in the ICCV19 - Multi Moments in Time challenge. Code is available to facilitate further research at https://github.com/deepcs233/TIN
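To make the interlacing idea concrete, below is a minimal PyTorch sketch of a differentiable temporal interlacing operator for features of shape (N, T, C, H, W): a fraction of the channels is shifted along the temporal axis by a learnable offset, and fractional offsets are realized as a linear interpolation between the two neighboring integer shifts so the offset remains trainable. The class name, the `shift_ratio` parameter and the single scalar offset are illustrative assumptions for this sketch, not the paper's official implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class TemporalInterlace(nn.Module):
    """Minimal sketch of a differentiable temporal interlacing operator.

    Assumes input of shape (N, T, C, H, W). A learnable fractional offset
    decides how far a group of channels is shifted along the temporal axis;
    non-integer offsets are realized by linearly interpolating between the
    two neighboring integer shifts, which keeps the operator differentiable.
    Names and hyper-parameters are illustrative, not the official API.
    """

    def __init__(self, shift_ratio: float = 0.25):
        super().__init__()
        self.shift_ratio = shift_ratio               # fraction of channels to interlace
        self.offset = nn.Parameter(torch.zeros(1))   # learnable temporal offset

    @staticmethod
    def _shift(x: torch.Tensor, steps: int) -> torch.Tensor:
        # Shift along the temporal dimension (dim=1), padding vacated frames with zeros.
        if steps == 0:
            return x
        pad = torch.zeros_like(x[:, :abs(steps)])
        if steps > 0:   # move features from the past toward the future
            return torch.cat([pad, x[:, :-steps]], dim=1)
        return torch.cat([x[:, -steps:], pad], dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, t, c, h, w = x.shape
        c_shift = int(c * self.shift_ratio)
        to_shift, rest = x[:, :, :c_shift], x[:, :, c_shift:]

        # Split the learnable offset into an integer part and a fractional part,
        # then blend the two neighboring integer shifts; gradients flow through
        # the fractional weight, so the offset can be learned end-to-end.
        lower = torch.floor(self.offset)
        frac = self.offset - lower
        lo = int(lower.item())
        shifted = (1 - frac) * self._shift(to_shift, lo) + frac * self._shift(to_shift, lo + 1)

        return torch.cat([shifted, rest], dim=2)
```

For example, `TemporalInterlace()(torch.randn(2, 8, 64, 14, 14))` returns a tensor of the same shape, with the first quarter of the channels blended across neighboring frames while the remaining channels pass through unchanged.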
