FitVid: Overfitting in Pixel-Level Video Prediction

by Mohammad Babaeizadeh, et al.

An agent capable of predicting what happens next can perform a variety of tasks through planning, with no additional training. Furthermore, such an agent can internally represent the complex dynamics of the real world and can therefore acquire a representation useful for a variety of visual perception tasks. This makes predicting the future frames of a video, conditioned on the observed past and potentially on future actions, an interesting task that remains exceptionally challenging despite many recent advances. Existing video prediction models have shown promising results on simple, narrow benchmarks, but they generate low-quality predictions on real-life datasets with more complicated dynamics or broader domains. There is a growing body of evidence that underfitting on the training data is one of the primary causes of low-quality predictions. In this paper, we argue that inefficient use of parameters in current video models is the main reason for underfitting. We therefore introduce a new architecture, named FitVid, which is capable of severe overfitting on the common benchmarks while having a parameter count similar to that of current state-of-the-art models. We analyze the consequences of overfitting, illustrating how it can produce unexpected outcomes, such as generating high-quality output by repeating the training data, and how it can be mitigated using existing image augmentation techniques. As a result, FitVid outperforms the current state-of-the-art models across four different video prediction benchmarks on four different metrics.
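The abstract mentions mitigating overfitting with existing image augmentation techniques. A minimal sketch of one common way to adapt such techniques to video (this is an illustrative assumption, not the FitVid implementation): sample a single random crop and flip, and apply that same transform to every frame of a clip, so the augmentation regularizes the model without distorting the temporal dynamics it must learn.

```python
import numpy as np

def augment_video(video, rng, crop=56):
    """Apply one random crop and horizontal flip consistently to every
    frame of a video of shape (T, H, W, C).

    Hypothetical helper for illustration only; sampling the transform
    once per clip (not per frame) keeps motion across frames coherent.
    """
    t, h, w, c = video.shape
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    out = video[:, top:top + crop, left:left + crop, :]
    if rng.random() < 0.5:
        out = out[:, :, ::-1, :]  # flip the width axis of all frames
    return out

rng = np.random.default_rng(0)
clip = np.zeros((10, 64, 64, 3))  # 10 frames of 64x64 RGB
augmented = augment_video(clip, rng)
```

Here every frame in `augmented` shares the same crop window and flip decision, so a model trained on the augmented clip still sees physically consistent motion.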

