MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training

by Yizhi Li et al.

Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech and audio, its application to music audio has yet to be thoroughly explored. This is primarily due to the distinctive challenges of modelling musical knowledge, particularly the tonal and pitched characteristics of music. To address this research gap, we propose an acoustic Music undERstanding model with large-scale self-supervised Training (MERT), which incorporates teacher models to provide pseudo labels for masked language modelling (MLM)-style acoustic pre-training. In our exploration, we identified a combination of teacher models that outperforms conventional speech and audio approaches: an acoustic teacher based on a Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical teacher based on the Constant-Q Transform (CQT). These teachers effectively guide our student model, a BERT-style transformer encoder, to better model music audio. In addition, we introduce an in-batch noise mixture augmentation to enhance representation robustness. Furthermore, we explore a wide range of settings to overcome the instability of acoustic language model pre-training, which allows our designed paradigm to scale from 95M to 330M parameters. Experimental results indicate that our model generalises and performs well on 14 music understanding tasks and attains state-of-the-art (SOTA) overall scores. The code and models are available online.
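The in-batch noise mixture augmentation described above can be sketched roughly as follows. This is a minimal, illustrative pure-Python sketch, not MERT's actual implementation: the function name, mixing probability, and gain are assumed placeholders rather than the paper's published hyperparameters. The idea is simply that each waveform in a training batch is randomly mixed with another waveform drawn from the same batch, so the model must learn representations that are robust to overlapping audio.

```python
import random

def inbatch_noise_mix(batch, mix_prob=0.5, gain=0.3, seed=0):
    """Mix each waveform with another waveform sampled from the same batch.

    batch     : list of waveforms (each a list of float samples, equal length)
    mix_prob  : probability of mixing a given example (illustrative value)
    gain      : scale applied to the added "noise" waveform (illustrative value)
    """
    rng = random.Random(seed)
    mixed = []
    for i, wav in enumerate(batch):
        if len(batch) > 1 and rng.random() < mix_prob:
            # Pick a different example from the batch to serve as noise.
            j = rng.choice([k for k in range(len(batch)) if k != i])
            noise = batch[j]
            mixed.append([a + gain * b for a, b in zip(wav, noise)])
        else:
            mixed.append(list(wav))
    return mixed
```

Because the "noise" is drawn from the batch itself, the augmentation needs no external noise corpus, which is one reason in-batch mixing is a convenient choice for large-scale pre-training.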

