GSEP: A robust vocal and accompaniment separation system using gated CBHG module and loudness normalization
In audio signal processing research, source separation has long been a popular topic, and the recent adoption of deep neural networks has brought significant improvements in performance. This improvement has encouraged the industry to commercialize products and services based on audio deep learning, including karaoke features in music streaming apps and dialogue enhancement in UHDTV. For these early markets, we define a set of design principles for a vocal and accompaniment separation model in terms of robustness, quality, and cost. In this paper, we introduce GSEP (Gaudio source SEParation system), a robust vocal and accompaniment separation system using a Gated-CBHG module, mask warping, and loudness normalization. Experiments verify that the proposed system satisfies all three principles and outperforms state-of-the-art systems in both objective measures and subjective assessment.
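The abstract names loudness normalization as one of the system's components but does not detail how it is applied. As an illustration only, the following is a minimal Python/NumPy sketch of per-clip gain normalization to a target RMS level; the function name, target level, and signal layout are assumptions for demonstration and are not taken from the paper, which may use a different loudness measure.

```python
import numpy as np


def normalize_loudness(audio: np.ndarray, target_rms_db: float = -23.0,
                       eps: float = 1e-8) -> np.ndarray:
    """Scale a mono waveform so its RMS level matches target_rms_db (dB full scale).

    Generic illustration of loudness normalization, not the specific
    method used in GSEP.
    """
    rms = np.sqrt(np.mean(audio ** 2) + eps)
    current_db = 20.0 * np.log10(rms + eps)
    gain_db = target_rms_db - current_db
    return audio * (10.0 ** (gain_db / 20.0))


if __name__ == "__main__":
    # Example: a quiet 1-second sine tone at 16 kHz, normalized to -23 dB RMS.
    sr = 16000
    t = np.arange(sr) / sr
    quiet_tone = 0.01 * np.sin(2 * np.pi * 440.0 * t)
    normalized = normalize_loudness(quiet_tone, target_rms_db=-23.0)
    print(f"RMS before: {np.sqrt(np.mean(quiet_tone ** 2)):.4f}, "
          f"after: {np.sqrt(np.mean(normalized ** 2)):.4f}")
```

Normalizing input loudness in this way is a common preprocessing step for making separation models robust to level differences across recordings, which aligns with the robustness principle stated in the abstract.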