Motion Selective Prediction for Video Frame Synthesis

12/25/2018
by   Veronique Prinet, et al.

Existing conditional video prediction approaches train a network on large databases and generalize to previously unseen data. We take the opposite stance and introduce a model that learns from the first frames of a given video and extends its content and motion, e.g., to double its length. To this end, we propose a dual network that can use both dynamic and static convolutional motion kernels in a flexible way to predict future frames. The construction of our model gives us the means to efficiently analyze its functioning and interpret its output. We demonstrate experimentally the robustness of our approach on challenging in-the-wild videos and show that it is competitive with respect to related baselines.
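The distinction between static and dynamic motion kernels can be illustrated with a minimal sketch: a static kernel is a single filter shared across the whole frame, while a dynamic kernel assigns each pixel its own filter, so different regions can move differently. The function names and shapes below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def apply_static_kernel(frame, kernel):
    """Convolve the frame with one shared (static) k x k motion kernel."""
    H, W = frame.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame)
    for i in range(H):
        for j in range(W):
            # Same kernel at every spatial location.
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def apply_dynamic_kernels(frame, kernels):
    """Apply a per-pixel (dynamic) kernel; kernels has shape (H, W, k, k)."""
    H, W = frame.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame)
    for i in range(H):
        for j in range(W):
            # A different kernel at each spatial location.
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernels[i, j])
    return out
```

For example, a static 3x3 kernel whose only nonzero weight sits left of center shifts the whole frame one pixel to the right (a global translation), whereas a dynamic kernel field can translate each pixel by a different amount, modeling locally varying motion.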


Related research

- Stochastic Video Generation with a Learned Prior (02/21/2018)
- SLAMP: Stochastic Latent Appearance and Motion Prediction (08/05/2021)
- Dual Motion GAN for Future-Flow Embedded Video Prediction (08/01/2017)
- Polar Prediction of Natural Videos (03/06/2023)
- Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks (07/09/2016)
- Future Frame Prediction for Robot-assisted Surgery (03/18/2021)
- Topological Eulerian Synthesis of Slow Motion Periodic Videos (05/15/2018)
