Joint Semantic Knowledge Distillation and Masked Acoustic Modeling for Full-band Speech Restoration with Improved Intelligibility

Xiaoyu Liu, Xu Li, Joan Serrà, Santiago Pascual

This is the demonstration page for the paper “Joint Semantic Knowledge Distillation and Masked Acoustic Modeling for Full-band Speech Restoration with Improved Intelligibility”, with samples generated by the proposed method and several baseline methods.

Abstract

Speech restoration aims to restore full-band speech with high quality and intelligibility, considering a diverse set of distortions. MaskSR is a recently proposed generative model for this task. Like other models of its kind, MaskSR attains high quality but, as we show, its intelligibility can be substantially improved. We do so by boosting the speech encoder component of MaskSR with predictions of semantic representations of the target speech, using a pre-trained self-supervised teacher model. Then, a masked language model is conditioned on the learned semantic features to predict acoustic tokens that encode low-level spectral details of the target speech. We show that, with the same model capacity and inference time as MaskSR, the proposed model, MaskSR2, significantly reduces the word error rate, a typical metric for intelligibility. MaskSR2 also achieves a word error rate competitive with other models, while providing superior quality. An ablation study shows the effectiveness of various semantic representations.
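To make the two training objectives in the abstract concrete, here is a minimal numpy sketch of how a semantic distillation term and a masked acoustic-modeling term could be combined. All shapes, the cosine distillation loss, the 0.5 masking ratio, and the weighting `lam` are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_masked(logits, targets, mask):
    """Cross-entropy over acoustic-token logits, averaged over masked positions only."""
    # logits: (T, V), targets: (T,), mask: (T,) bool, True where the token was masked.
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]
    return nll[mask].mean()

def distill_loss(student_feats, teacher_feats):
    """Semantic distillation as 1 - cosine similarity, frame by frame (an assumption)."""
    s = student_feats / np.linalg.norm(student_feats, axis=-1, keepdims=True)
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=-1, keepdims=True)
    return (1.0 - (s * t).sum(axis=-1)).mean()

T, V, D = 50, 1024, 768               # frames, codebook size, feature dim (illustrative)
logits  = rng.normal(size=(T, V))     # masked-LM predictions of acoustic tokens
targets = rng.integers(0, V, size=T)  # codec tokens of the clean target speech
mask    = rng.random(T) < 0.5         # which positions were masked this step
student = rng.normal(size=(T, D))     # speech-encoder features from the distorted input
teacher = rng.normal(size=(T, D))     # frozen self-supervised features of the target

lam = 1.0  # relative weight of the distillation term (assumed)
total = cross_entropy_masked(logits, targets, mask) + lam * distill_loss(student, teacher)
```

The key point is that the cross-entropy is computed only on masked positions, while the distillation term supervises the encoder on every frame, pushing its features toward the teacher's phonetic representation.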

Demos

Below, we show audio samples demonstrating how MaskSR2 performs on the full-band speech restoration task and several sub-tasks, compared with several baseline methods.

Full-band 44.1 kHz speech restoration

Speech restoration with distortions including noise, reverb, clipping, and low bandwidth
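Two of the distortions listed above, clipping and low bandwidth, are easy to reproduce; the following numpy sketch shows one simple way the degraded inputs heard on this page could be simulated (hard clipping at a fixed threshold, and an FFT-based low-pass to mimic a narrow-band recording). This is an illustration, not the paper's actual data pipeline.

```python
import numpy as np

SR = 44100  # full-band sample rate used on this page

def clip(x, threshold):
    """Hard-clip the waveform at +/- threshold (e.g. 0.1 or 0.25, as in the demos below)."""
    return np.clip(x, -threshold, threshold)

def band_limit(x, cutoff_hz, sr=SR):
    """Crude low-pass via FFT bin zeroing, mimicking a low-bandwidth recording."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(x))

# 1-second test signal mixing a 500 Hz and a 6 kHz component
t = np.arange(SR) / SR
x = 0.5 * np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)

clipped = clip(x, 0.1)           # "clipping threshold 0.1" condition
narrow  = band_limit(x, 1000.0)  # "from 1 kHz" bandwidth-extension condition
```

After `band_limit(x, 1000.0)`, only the 500 Hz component survives; restoring the content above the cutoff is exactly the bandwidth-extension task demonstrated below.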

MaskSR2 vs. MaskSR: Improved Intelligibility

These samples demonstrate that MaskSR2 improves the intelligibility of the generated speech compared with MaskSR. Semantic knowledge distillation helps preserve the phonetic content while removing the distortions.

Each example pairs audio samples (Unprocessed, Target, MaskSR-L, MaskSR2-L; players not reproduced here) with the target transcription and the words improved by MaskSR2:

“Throughout the centuries, people have explained the rainbow.” (improved: rainbow)
“I could not believe the results.” (improved: I could not believe the results)
“I have so much respect for him.” (improved: His friend for)
“The quality of life is difficult for them.” (improved: life, difficult)
“However, the story has a happy ending.” (improved: ending)
“It's not going to work.” (improved: it's)
“I would not do that.” (improved: do)
“And the other winners are.” (improved: other)

MaskSR2 vs. other full-band models

Unprocessed Target MaskSR-L MaskSR2-L VoiceFixer DeepFilterNet3

Bandwidth Extension

From 1 kHz to full bandwidth

Unprocessed Target MaskSR-L MaskSR2-L VoiceFixer DeepFilterNet3

From 2 kHz to full bandwidth

Unprocessed Target MaskSR-L MaskSR2-L VoiceFixer DeepFilterNet3

From 4 kHz to full bandwidth

Unprocessed Target MaskSR-L MaskSR2-L VoiceFixer DeepFilterNet3

Declipping

Clipping threshold 0.1

Unprocessed Target MaskSR-L MaskSR2-L VoiceFixer DeepFilterNet3

Clipping threshold 0.25

Unprocessed Target MaskSR-L MaskSR2-L VoiceFixer DeepFilterNet3

Wideband 16 kHz speech denoising

DNS-2020 no_reverb test samples

Unprocessed Target MaskSR-L MaskSR2-L FRCRN DEMUCS

DNS-2020 real_recordings test samples

Unprocessed MaskSR-L MaskSR2-L FRCRN DEMUCS