Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with Visual Computing for Improved Music Video Analysis

by Alexander Schindler, et al.

This thesis combines audio analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective. It focuses on the information provided by the visual layer of music videos and on how this information can be harnessed to augment and improve tasks of the MIR research domain. The main hypothesis of this work is based on the observation that certain expressive categories, such as genre or theme, can be recognized from the visual content alone, without the sound being heard. This leads to the hypothesis that there exists a visual language which is used to express mood or genre. As a further consequence, it can be concluded that this visual information is music-related and should therefore be beneficial for corresponding MIR tasks such as music genre classification or mood recognition. A series of comprehensive experiments and evaluations is conducted, focused on the extraction of visual information and its application to different MIR tasks. A custom dataset is created that is suitable for developing and testing visual features able to represent music-related information. Evaluations range from low-level visual features to high-level concepts retrieved by means of Deep Convolutional Neural Networks. Additionally, new visual features are introduced that capture rhythmic visual patterns. In all of these experiments the audio-based results serve as a benchmark for the visual and audio-visual approaches. The experiments are conducted for three MIR tasks: Artist Identification, Music Genre Classification, and Cross-Genre Classification. They show that an audio-visual approach harnessing high-level semantic information gained from visual concept detection outperforms audio-only genre-classification accuracy by 16.43%.
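The audio-visual combination described in the abstract can be illustrated with a minimal early-fusion sketch: per-modality descriptors are normalised, concatenated into one feature vector, and fed to a classifier. All feature names, dimensions, and data below are hypothetical placeholders, and the nearest-centroid classifier is a stand-in, not the thesis's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted descriptors: audio statistics (e.g. timbral
# features per track) and visual concept scores (e.g. CNN detector outputs
# per music video). Shapes and values are synthetic for illustration.
n_tracks, n_audio_dims, n_visual_dims, n_genres = 60, 12, 8, 3
X_audio = rng.normal(size=(n_tracks, n_audio_dims))
X_visual = rng.normal(size=(n_tracks, n_visual_dims))
y = rng.integers(0, n_genres, size=n_tracks)  # toy genre labels

def zscore(X):
    """Standardise each feature column so modalities are comparable."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

# Early fusion: normalise each modality separately, then concatenate.
X_fused = np.hstack([zscore(X_audio), zscore(X_visual)])

# Minimal nearest-centroid classifier over the fused feature space.
centroids = np.stack([X_fused[y == c].mean(axis=0) for c in range(n_genres)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

train_acc = float(np.mean([predict(x) == c for x, c in zip(X_fused, y)]))
```

In a real evaluation each modality would also be benchmarked on its own, so that the fused model's accuracy can be compared against the audio-only baseline, as done in the experiments above.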




