Image Comes Dancing with Collaborative Parsing-Flow Video Synthesis

by Bowen Wu, et al.

Transferring human motion from a source to a target person holds great potential for computer vision and graphics applications. A crucial step is to manipulate the sequential future motion while retaining the appearance characteristics of the target. Previous work has either relied on crafted 3D human models or trained a separate model for each target person, which is not scalable in practice. This work studies a more general setting, in which we aim to learn a single model that parsimoniously transfers motion from a source video to any target person given only one image of that person, named the Collaborative Parsing-Flow Network (CPF-Net). The paucity of information about the target person makes it particularly challenging to faithfully preserve the appearance under varying designated poses.

To address this issue, CPF-Net integrates structured human parsing and appearance flow to guide realistic foreground synthesis, and the synthesized foreground is merged into the background by a spatio-temporal fusion module. In particular, CPF-Net decouples the problem into three stages: human parsing sequence generation, foreground sequence generation, and final video generation. The human parsing generation stage captures both the pose and the body structure of the target, while the appearance flow helps preserve details in the synthesized frames. Together, human parsing and appearance flow effectively guide the generation of video frames with realistic appearance. Finally, a dedicated fusion network ensures temporal coherence.

We further collect a large set of human dancing videos to push this research field forward. Both quantitative and qualitative results show that our method substantially improves over previous approaches and generates appealing, photo-realistic target videos given any input person image. All source code and the dataset will be released at
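The three-stage decomposition described above can be sketched as a simple pipeline. This is a minimal structural sketch, not the authors' implementation: all function names and the placeholder data they pass around are hypothetical, and each stage stands in for a learned network in the actual CPF-Net.

```python
# Hypothetical sketch of the CPF-Net three-stage pipeline.
# Frames are represented as plain dicts; real stages are neural networks.

def generate_parsing_sequence(source_poses, target_image):
    # Stage 1: predict a human-parsing map for each source pose,
    # capturing the target's body structure under the designated pose.
    return [{"pose": pose, "parsing": "parsing_map"} for pose in source_poses]

def generate_foreground_sequence(parsings, target_image):
    # Stage 2: synthesize the foreground per frame, guided by the parsing
    # map and an appearance flow that warps details from the single
    # reference image of the target person.
    return [{"frame": i, "foreground": "warped_appearance"}
            for i, _ in enumerate(parsings)]

def fuse_with_background(foregrounds, background):
    # Stage 3: composite each foreground onto the background with a
    # spatio-temporal fusion step for temporal coherence.
    return [{"frame": fg["frame"], "composited": True} for fg in foregrounds]

def transfer_motion(source_poses, target_image, background):
    parsings = generate_parsing_sequence(source_poses, target_image)
    foregrounds = generate_foreground_sequence(parsings, target_image)
    return fuse_with_background(foregrounds, background)
```

The key design point the abstract emphasizes is the decoupling itself: because stage 1 commits to a structure (the parsing sequence) before any pixels are synthesized, stage 2 can focus purely on appearance, and stage 3 purely on blending and coherence.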

