Ultra-light deep MIR by trimming lottery tickets

07/31/2020
by Philippe Esling, et al.

Current state-of-the-art results in Music Information Retrieval (MIR) are largely dominated by deep learning approaches, which provide unprecedented accuracy across all tasks. However, a consistently overlooked downside of these models is their massive complexity, which also appears crucial to their success. In this paper, we address this issue by proposing a model pruning method based on the lottery ticket hypothesis. We modify the original approach to explicitly remove parameters through structured trimming of entire units, instead of simply masking individual weights. This leads to models that are effectively lighter in size, memory, and number of operations. We show that our proposal can remove up to 90% of the model parameters without loss of accuracy, leading to ultra-light deep MIR models. We confirm the surprising result that, at smaller compression ratios (removing up to 85% of the parameters), lighter models consistently outperform their heavier counterparts. We exhibit these results on a large array of MIR tasks, including audio classification, pitch recognition, chord extraction, drum transcription, and onset estimation. The resulting ultra-light deep learning models for MIR can run on CPU, and can even fit on embedded devices with minimal degradation of accuracy.
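The distinction between masking individual weights and trimming entire units can be made concrete with a minimal NumPy sketch. This is an illustrative assumption, not the paper's implementation: the ranking criterion (L1 norm of a unit's incoming weights) and the toy layer sizes are chosen here for clarity. Unstructured masking keeps the weight matrix's shape, so the stored model is no smaller; structured trimming drops whole rows and yields a genuinely smaller layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense layer: 8 input features -> 16 hidden units.
W = rng.normal(size=(16, 8))

# Unstructured pruning (original lottery ticket style): zero out the
# smallest 90% of individual weights. The matrix keeps its shape, so
# size, memory, and operation count are unchanged without sparse kernels.
threshold = np.quantile(np.abs(W), 0.9)
W_masked = W * (np.abs(W) >= threshold)
print(W_masked.shape)            # still (16, 8)

# Structured trimming (sketch of the paper's variant): score each unit
# by the L1 norm of its incoming weights (an assumed criterion) and
# remove entire units, producing a smaller dense matrix.
unit_scores = np.abs(W).sum(axis=1)
keep = np.sort(np.argsort(unit_scores)[-4:])   # keep the 4 strongest units
W_trimmed = W[keep]
print(W_trimmed.shape)           # (4, 8): a genuinely lighter layer
```

The trimmed layer computes a dense matrix-vector product over fewer units, which is why structured removal translates directly into CPU and embedded-device savings, whereas a masked matrix only helps if the runtime exploits sparsity.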


