EDGE: Editable Dance Generation From Music

11/19/2022
by Jonathan Tseng, et al.

Dance is an important human art form, but creating new dances can be difficult and time-consuming. In this work, we introduce Editable Dance GEneration (EDGE), a state-of-the-art method for editable dance generation that is capable of creating realistic, physically plausible dances while remaining faithful to the input music. EDGE uses a transformer-based diffusion model paired with Jukebox, a strong music feature extractor, and confers powerful editing capabilities well suited to dance, including joint-wise conditioning and in-betweening. We introduce a new metric for physical plausibility and evaluate the quality of dances generated by our method extensively through (1) multiple quantitative metrics on physical plausibility, beat alignment, and diversity benchmarks and, more importantly, (2) a large-scale user study, demonstrating a significant improvement over previous state-of-the-art methods. Qualitative samples from our model can be found on our website.
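The editing capabilities named in the abstract (joint-wise conditioning and in-betweening) are naturally expressed as constrained, inpainting-style sampling from a diffusion model: at every denoising step, the constrained joints or frames are overwritten with a suitably noised copy of the reference motion, and only the unconstrained region is left to the model. The sketch below illustrates that loop in isolation; the shapes, step count, `dummy_denoise_step` stand-in, and the linear noise schedule are all illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of diffusion-based motion editing via masking, in the spirit of
# EDGE's joint-wise conditioning and in-betweening. All names, shapes, and the
# trivial "denoiser" below are illustrative assumptions, not the authors' code.
import numpy as np

T_FRAMES, J_FEATS = 150, 72      # assumed: 5 s at 30 FPS, 24 joints x 3 rotation dims
N_STEPS = 50                     # assumed number of reverse-diffusion steps

def dummy_denoise_step(x_t, step, music_feats):
    """Stand-in for one reverse-diffusion step of a trained, music-conditioned
    transformer. Here it just shrinks the sample toward zero and re-injects a
    little noise, so the editing loop below runs end to end."""
    x_prev = 0.9 * x_t
    if step > 1:
        x_prev += 0.05 * np.random.randn(*x_t.shape)
    return x_prev

def edit_sample(reference_motion, known_mask, music_feats, rng=np.random):
    """Generate motion that agrees with `reference_motion` wherever
    `known_mask` is 1 (e.g. fixed upper-body joints, or the first and last
    second for in-betweening) and is freely generated elsewhere."""
    x_t = rng.randn(T_FRAMES, J_FEATS)               # start from pure noise
    for step in range(N_STEPS, 0, -1):
        x_t = dummy_denoise_step(x_t, step, music_feats)
        # Overwrite the constrained region with an appropriately noised copy of
        # the reference so the generated part stays consistent with it.
        noise_level = step / N_STEPS
        noised_ref = reference_motion + noise_level * rng.randn(T_FRAMES, J_FEATS)
        x_t = known_mask * noised_ref + (1.0 - known_mask) * x_t
    return x_t

# Example: in-betweening -- keep the first and last 30 frames, generate the middle.
reference = np.zeros((T_FRAMES, J_FEATS))            # would be real motion data
mask = np.zeros((T_FRAMES, J_FEATS))
mask[:30] = 1.0
mask[-30:] = 1.0
music = np.zeros((T_FRAMES, 4800))                   # placeholder for Jukebox features
edited = edit_sample(reference, mask, music)
print(edited.shape)                                   # (150, 72)
```

In the full method, the dummy step would be replaced by the trained music-conditioned transformer; the same masking mechanism covers both temporal constraints (in-betweening) and per-joint constraints (joint-wise conditioning) simply by choosing which entries of the mask are set to 1.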


Related research

Video Background Music Generation: Dataset, Method and Evaluation (11/21/2022)
Music is essential when editing videos, but selecting music manually is ...

DiffDance: Cascaded Human Motion Diffusion Model for Dance Generation (08/05/2023)
When hearing music, it is natural for people to dance to its rhythm. Aut...

MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis (12/08/2022)
Conventional methods for human motion synthesis are either deterministic...

Symbolic music generation conditioned on continuous-valued emotions (03/30/2022)
In this paper we present a new approach for the generation of multi-inst...

ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models (02/09/2023)
In recent years, there has been an increased popularity in image and spe...

A-Muze-Net: Music Generation by Composing the Harmony based on the Generated Melody (11/25/2021)
We present a method for the generation of Midi files of piano music. The...
MDSC: Towards Evaluating the Style Consistency Between Music and Dance (09/04/2023)
We propose MDSC (Music-Dance-Style Consistency), the first evaluation met...
