Latent Image Animator: Learning to Animate Images via Latent Space Navigation

by Yaohui Wang, et al.

Due to the remarkable progress of deep generative models, animating images has become increasingly efficient, and the associated results increasingly realistic. Current animation approaches commonly exploit structure representations extracted from driving videos. Such structure representations are instrumental in transferring motion from driving videos to still images. However, these approaches fail when the source image and driving video exhibit large appearance variation. Moreover, extracting structure information requires additional modules that increase the complexity of the animation model. Deviating from such models, we here introduce the Latent Image Animator (LIA), a self-supervised autoencoder that evades the need for structure representation. LIA is streamlined to animate images by linear navigation in the latent space: motion in the generated video is constructed by linear displacement of codes in the latent space. Towards this, we learn a set of orthogonal motion directions simultaneously, and use their linear combinations to represent any displacement in the latent space. Extensive quantitative and qualitative analysis suggests that our model systematically and significantly outperforms state-of-the-art methods on the VoxCeleb, TaiChi and TED-talk datasets with respect to generated quality.
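The core idea of navigating the latent space via a linear combination of orthogonal motion directions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the QR-based orthonormalization (the paper learns the directions end-to-end), and the function names are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: latent code dimension and number of motion directions.
LATENT_DIM, NUM_DIRECTIONS = 512, 20

# Stand-in for a learned direction dictionary: a random matrix whose columns
# are orthonormalized via QR decomposition (LIA learns orthogonal directions
# jointly with the rest of the model; QR is only used here for illustration).
D = np.linalg.qr(rng.standard_normal((LATENT_DIM, NUM_DIRECTIONS)))[0]

def navigate(z_source, magnitudes):
    """Displace a source latent code by a linear combination of directions.

    z_source:   latent code of the still image, shape (LATENT_DIM,)
    magnitudes: per-direction coefficients (in LIA, predicted per frame
                from the driving video), shape (NUM_DIRECTIONS,)
    """
    return z_source + D @ magnitudes

# One step of latent navigation for a single frame.
z_src = rng.standard_normal(LATENT_DIM)
a = rng.standard_normal(NUM_DIRECTIONS)
z_drv = navigate(z_src, a)

# Because the columns of D are orthonormal, the coefficients of any
# displacement in their span can be read back off by projection.
recovered = D.T @ (z_drv - z_src)
```

Orthogonality is what makes the directions a well-behaved basis: each coefficient controls an independent component of the displacement, and projecting a displacement onto the directions recovers the coefficients exactly.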


