MusicJam: Visualizing Music Insights via Generated Narrative Illustrations

by Chuer Chen, et al.

Visualizing the insights of music, an inherently invisible medium, can offer listeners an enjoyable and immersive experience, and has therefore attracted much attention in the field of information visualization. Over the past decades, various music visualization techniques have been introduced. However, most of them are manually designed around visual encoding rules and presented as abstract graphical representations whose encoding schemas take effort to understand. Recently, some researchers have used figures or illustrations to represent music moods, lyrics, and musical features, which are more intuitive and attractive. In these techniques, however, the figures are usually pre-selected or statically generated, so they cannot precisely convey the insights of different pieces of music. To address this issue, we introduce MusicJam, a music visualization system that generates narrative illustrations to represent the insights of input music. The system leverages a novel generation model, designed on top of GPT-2, to produce meaningful lyrics for the input music, and then employs the Stable Diffusion model to transform the lyrics into coherent illustrations. Finally, the generated results are synchronized and rendered as an MP4 video accompanied by the input music. We evaluated the proposed lyric generation model against baseline models and conducted a user study to assess the quality of the generated illustrations and the final music videos. The results demonstrate the effectiveness of our technique.
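The three-stage pipeline the abstract describes (music → generated lyrics → illustrations → synchronized video) can be sketched as follows. This is a minimal structural sketch, not the authors' implementation: the function names are hypothetical, and the model stages are replaced with stand-in stubs where a GPT-2-based lyric generator and a Stable Diffusion image generator would actually be called.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LyricLine:
    text: str
    start_sec: float  # onset in the audio, used to sync the illustration
    end_sec: float

def generate_lyrics(audio_features: List[float]) -> List[LyricLine]:
    """Stage 1 (stand-in): a GPT-2-style model conditioned on music
    features would go here; this stub emits timed placeholder lines."""
    return [LyricLine(f"line {i}", i * 4.0, (i + 1) * 4.0)
            for i in range(len(audio_features))]

def illustrate(line: LyricLine) -> str:
    """Stage 2 (stand-in): a text-to-image model such as Stable Diffusion
    would render each lyric line; this stub returns a fake frame path."""
    return f"frames/{line.start_sec:.0f}.png"

def synchronize(lines: List[LyricLine],
                images: List[str]) -> List[Tuple[str, float, float]]:
    """Stage 3: pair each illustration with its lyric's time span so a
    video renderer can show it for the right duration in the MP4."""
    return [(img, l.start_sec, l.end_sec) for l, img in zip(lines, images)]

lines = generate_lyrics([0.1, 0.5, 0.9])
timeline = synchronize(lines, [illustrate(l) for l in lines])
```

The key design point the sketch preserves is that synchronization is driven by per-line time spans, so each illustration stays on screen for exactly the duration of the lyric it depicts.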




Augmenting Sheet Music with Rhythmic Fingerprints

In this paper, we bridge the gap between visualization and musicology by...

Generative Disco: Text-to-Video Generation for Music Visualization

Visuals are a core part of our experience of music, owing to the way the...

Listen to Dance: Music-driven choreography generation using Autoregressive Encoder-Decoder Network

Automatic choreography generation is a challenging task because it often...

Music Generation with Temporal Structure Augmentation

In this paper we introduce a novel feature augmentation approach for gen...

Melody Infilling with User-Provided Structural Context

This paper proposes a novel Transformer-based model for music score infi...

An Artistic Visualization of Music Modeling a Synesthetic Experience

This project brings music to sight. Music can be a visual masterpiece. S...

Glyph from Icon – Automated Generation of Metaphoric Glyphs

Metaphoric glyphs enhance the readability and learnability of abstract g...
