Video Reenactment as Inductive Bias for Content-Motion Disentanglement

We introduce a self-supervised motion-transfer VAE to disentangle motion and content in video. Unlike previous work on content-motion disentanglement in videos, we adopt a chunk-wise modeling approach and exploit the motion information contained in spatiotemporal neighborhoods. Our model yields per-chunk representations that can be modeled independently while preserving temporal consistency, so whole videos can be reconstructed in a single forward pass. We extend the ELBO's log-likelihood term with a Blind Reenactment Loss that acts as an inductive bias toward motion disentanglement, under the assumption that swapping motion features between two videos yields reenactment. We evaluate our model on recently proposed disentanglement metrics and show that it outperforms a variety of methods for video motion-content disentanglement. Experiments on video reenactment demonstrate the effectiveness of the disentanglement in the input space, where our model outperforms the baselines in reconstruction quality and motion alignment.
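To make the swap-based objective concrete, below is a minimal PyTorch sketch of one training step over two video chunks. This is an illustration based only on the abstract: the ChunkVAE module, its shapes, and in particular the re-encoding form chosen for the blind reenactment term are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the motion-swap objective described in the
# abstract; architecture and the exact blind-term form are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChunkVAE(nn.Module):
    """Toy chunk-wise VAE: encodes a video chunk (B, C, T, H, W)
    into separate content and motion latents."""
    def __init__(self, in_dim=3 * 8 * 32 * 32, z_c=64, z_m=64):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * (z_c + z_m))  # means and log-vars
        self.dec = nn.Linear(z_c + z_m, in_dim)
        self.z_c = z_c

    def encode(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = h.chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z[:, :self.z_c], z[:, self.z_c:], mu, logvar   # content, motion

    def decode(self, z_c, z_m):
        return self.dec(torch.cat([z_c, z_m], dim=1))

def loss_step(model, xa, xb, beta=1.0, lam=1.0):
    ca, ma, mu_a, lv_a = model.encode(xa)
    cb, mb, mu_b, lv_b = model.encode(xb)
    # Standard ELBO terms: reconstruction + KL for both chunks.
    rec = F.mse_loss(model.decode(ca, ma), xa.flatten(1)) + \
          F.mse_loss(model.decode(cb, mb), xb.flatten(1))
    kl = -0.5 * torch.mean(1 + lv_a - mu_a.pow(2) - lv_a.exp()) \
         -0.5 * torch.mean(1 + lv_b - mu_b.pow(2) - lv_b.exp())
    # Blind reenactment term (assumed form): decode with swapped motion,
    # re-encode, and require both factors to survive the swap, since no
    # ground-truth reenacted frames exist to supervise x_ab directly.
    x_ab = model.decode(ca, mb)          # content of A driven by motion of B
    c_ab, m_ab, _, _ = model.encode(x_ab.view_as(xa))
    reenact = F.mse_loss(c_ab, ca.detach()) + F.mse_loss(m_ab, mb.detach())
    return rec + beta * kl + lam * reenact
```

Under these assumptions, reenactment at test time reduces to decoding a source chunk's content latent with a driving chunk's motion latent, chunk by chunk over the whole video.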

