ICASSP 2020 Virtual Conference

Machine Learning for Signal Processing
MLSP-L7.5
Lecture
Machine Learning Applications III

IMPROVING SINGING VOICE SEPARATION WITH THE WAVE-U-NET USING MINIMUM HYPERSPHERICAL ENERGY

Joaquin Perez-Lapillo

Date & Time

Thu, May 7, 2020

5:30 pm – 7:30 pm

Location

On-Demand

Abstract

In recent years, deep learning has surpassed traditional approaches to the problem of singing voice separation. The Wave-U-Net is a recent deep network architecture that operates directly in the time domain. The standard Wave-U-Net is trained with data augmentation and early stopping to prevent overfitting. Minimum hyperspherical energy (MHE) regularisation has recently been shown to improve generalisation in image classification by encouraging a diversified filter configuration. In this work, we apply MHE regularisation to the 1D filters of the Wave-U-Net. We evaluate this approach for separating the vocal part from mixed music audio recordings on the MUSDB18 dataset. We find that adding MHE regularisation to the loss function consistently improves singing voice separation, as measured by the Signal-to-Distortion Ratio (SDR) on test recordings, yielding the best time-domain system for singing voice extraction.
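The core idea of MHE regularisation can be illustrated in a few lines. The sketch below is a minimal NumPy illustration, not the authors' implementation: it assumes the common Riesz s-energy form of MHE, in which each filter is projected onto the unit hypersphere and the sum of inverse pairwise distances is added to the training loss, so that minimising the loss pushes filters apart (a more diverse filter bank). The function name `mhe_energy` and the toy filter banks are hypothetical.

```python
import numpy as np

def mhe_energy(weights, s=2.0, eps=1e-6):
    """Riesz s-energy of a filter bank (one common form of MHE).

    weights: (num_filters, filter_dim) array, e.g. flattened 1-D conv
    kernels. Each filter is normalised onto the unit hypersphere, then
    the inverse-power pairwise distances are summed. Adding this term
    to the loss penalises filters that point in similar directions.
    """
    # Project each filter onto the unit hypersphere
    w = weights / (np.linalg.norm(weights, axis=1, keepdims=True) + eps)
    # Pairwise Euclidean distances between all filters
    diff = w[:, None, :] - w[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Sum 1 / d^s over distinct pairs (upper triangle, excluding diagonal)
    iu = np.triu_indices(len(w), k=1)
    return np.sum(1.0 / (dist[iu] ** s + eps))

# Nearly collinear filters incur a much larger energy (penalty)
# than well-spread ones, which is what drives diversification.
clustered = np.array([[1.0, 0.01], [1.0, -0.01], [1.0, 0.02]])
spread = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
```

In training, a term like `lambda_mhe * mhe_energy(layer_weights)` would be added to the separation loss for each convolutional layer, with `lambda_mhe` a small weighting hyperparameter (also an assumption here).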


Presenter

Joaquin Perez-Lapillo

City, University of London

Session Chair

Tao Zhang

Starkey