Spectrogram Feature Losses for Music Source Separation

 

In this paper we study deep learning-based music source separation and explore an alternative to the standard pixel-level L2 loss on spectrograms for model training.

September 2, 2019
EUSIPCO 2019

 

Authors

Abhimanyu Sahai (Disney Research/ETH Joint M.Sc.)

Romann Weber (Disney Research)

Brian McWilliams (Disney Research)


Abstract

Our main contribution is demonstrating that adding a high-level feature loss term, extracted from the spectrograms using a VGG network, can improve separation quality relative to a pure pixel-level loss. We show this improvement in the context of MMDenseNet, a state-of-the-art deep learning model for this task, for the extraction of drums and vocals from songs in the musdb18 dataset, which covers a broad range of Western music genres. We believe that this finding can be generalized and applied to machine learning-based systems in the audio domain more broadly.
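To make the idea concrete, below is a minimal sketch, not the authors' implementation, of such a combined loss in PyTorch. The use of an ImageNet-pretrained VGG-16, the layer cut point, the single-channel replication, and the weighting term `feat_weight` are all assumptions for illustration; the paper's own feature-extractor setup may differ.

```python
# Minimal sketch (assumptions noted above): pixel-level L2 loss plus a
# VGG feature loss computed on magnitude spectrograms.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Truncate a pretrained VGG-16 after an intermediate conv block and freeze it,
# so it acts as a fixed feature extractor (the cut point is an assumption).
vgg_features = vgg16(pretrained=True).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def combined_loss(est_spec, ref_spec, feat_weight=0.1):
    """Pixel-level L2 loss plus a VGG feature loss between spectrograms.

    est_spec, ref_spec: (batch, 1, freq, time) magnitude spectrograms.
    feat_weight: relative weight of the feature term (hypothetical value).
    """
    pixel_loss = F.mse_loss(est_spec, ref_spec)
    # VGG expects 3-channel input; replicate the single spectrogram channel.
    est_feat = vgg_features(est_spec.repeat(1, 3, 1, 1))
    ref_feat = vgg_features(ref_spec.repeat(1, 3, 1, 1))
    feature_loss = F.mse_loss(est_feat, ref_feat)
    return pixel_loss + feat_weight * feature_loss
```

In a training loop, such a combined loss would stand in for the plain spectrogram L2 loss: the feature term encourages the estimated spectrogram to match the target's higher-level spectro-temporal structure rather than matching every bin exactly.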
