Registration for ICASSP is free of charge, but registration is required to view the videos. If you have not yet registered, please register for the full virtual conference. Your username is your email address and your password is your confirmation number/registration ID.


Speech Processing
Speech Separation and Extraction III


Zhaoheng Ni

Date & Time

Thu, May 7, 2020

12:30 pm – 2:30 pm




Speaker separation refers to isolating speech of interest in a multi-talker environment. Most methods apply real-valued Time-Frequency (T-F) masks to the mixture Short-Time Fourier Transform (STFT) to reconstruct the clean speech. Hence there is an unavoidable mismatch between the phase of the reconstruction and the original phase of the clean speech. In this paper, we propose a simple yet effective phase estimation network that predicts the phase of the clean speech based on a T-F mask predicted by a chimera++ network. To overcome the label-permutation problem for both the T-F mask and the phase, we propose a mask-dependent permutation invariant training (PIT) criterion to select the phase signal based on the loss from the T-F mask prediction. We also propose an Inverse Mask Weighted Loss Function for phase prediction to focus the model on the T-F regions in which the phase is more difficult to predict. Results on the WSJ0-2mix dataset show that the phase estimation network achieves comparable performance to models that use iterative phase reconstruction or end-to-end time-domain loss functions, but in a more straightforward manner.
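The abstract describes two loss-design ideas: a mask-dependent PIT criterion, where the speaker permutation is chosen by the T-F mask loss and then reused for the phase loss, and an inverse-mask-weighted phase loss that emphasizes T-F regions where phase is harder to predict. A minimal NumPy sketch of a plausible two-speaker version follows; the function name, the MSE mask loss, the cosine phase distance, and the `1 - mask` weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mask_dependent_pit_loss(est_masks, ref_masks, est_phase, ref_phase):
    """Illustrative 2-speaker mask-dependent PIT (a sketch, not the
    paper's implementation). Arrays have shape (2, F, T): two speakers,
    F frequency bins, T frames."""
    perms = [(0, 1), (1, 0)]
    # Choose the speaker permutation by the T-F mask loss alone.
    mask_losses = [np.mean((est_masks[list(p)] - ref_masks) ** 2)
                   for p in perms]
    best = perms[int(np.argmin(mask_losses))]
    # Inverse-mask weighting (assumed form): weight phase errors by
    # 1 - mask, so bins where the target is weak count more.
    w = 1.0 - ref_masks
    # Cosine phase distance, applied under the mask-selected permutation.
    phase_err = 1.0 - np.cos(est_phase[list(best)] - ref_phase)
    phase_loss = np.sum(w * phase_err) / np.sum(w)
    return min(mask_losses), phase_loss
```

The key point the sketch illustrates is that the phase branch never runs its own permutation search: whatever pairing the mask loss selects is imposed on the phase loss, which resolves label permutation consistently across both outputs.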


Zhaoheng Ni

Graduate Center, City University of New York

Session Chairs