Speech Resynthesis from Discrete
Disentangled Self-Supervised Representations

Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia,
Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux

In Proceedings of Interspeech 2021

[Paper]

We propose using self-supervised discrete representations for the task of speech resynthesis. To generate disentangled representations, we separately extract low-bitrate representations for speech content, prosodic information, and speaker identity. This allows us to synthesize speech in a controllable manner. We analyze various state-of-the-art, self-supervised representation learning methods and shed light on the advantages of each method while considering reconstruction quality and disentanglement properties. Specifically, we evaluate F0 reconstruction, speaker identification performance (for both resynthesis and voice conversion), recordings' intelligibility, and overall quality using subjective human evaluation. Lastly, we demonstrate how these representations can be used for an ultra-lightweight speech codec. Using the obtained representations, we reach a rate of 365 bits per second while providing better speech quality than the baseline methods.
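
The overall pipeline can be summarized with a minimal sketch. All module names, dimensions, and the tiny linear "vocoder" below are illustrative placeholders, not the paper's implementation: in the paper, the content units come from a pretrained HuBERT, CPC, or VQ-VAE model, F0 is quantized with a VQ-VAE, the speaker is a learned lookup embedding, and the decoder is a HiFi-GAN generator.

```python
# A minimal sketch of the disentangled resynthesis idea described above.
# Everything here (class name, unit counts, embedding sizes, the stand-in
# vocoder) is a placeholder for illustration only.
import torch
import torch.nn as nn


class DiscreteResynthesizer(nn.Module):
    def __init__(self, n_content_units=100, n_f0_units=20, n_speakers=200, dim=128):
        super().__init__()
        # Lookup tables turn the three discrete streams into dense embeddings.
        self.content_emb = nn.Embedding(n_content_units, dim)  # phonetic content
        self.f0_emb = nn.Embedding(n_f0_units, dim)            # quantized prosody (F0)
        self.speaker_emb = nn.Embedding(n_speakers, dim)        # speaker identity
        # Stand-in for the HiFi-GAN generator: maps frame features to waveform samples.
        self.vocoder = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 320)  # 320 samples/frame
        )

    def forward(self, content_units, f0_units, speaker_id):
        # content_units, f0_units: (batch, frames); speaker_id: (batch,)
        c = self.content_emb(content_units)
        p = self.f0_emb(f0_units)
        s = self.speaker_emb(speaker_id).unsqueeze(1).expand_as(c)
        frames = self.vocoder(torch.cat([c, p, s], dim=-1))  # (batch, frames, 320)
        return frames.flatten(1)                              # (batch, samples)


model = DiscreteResynthesizer()
units = torch.randint(0, 100, (1, 50))     # 50 frames of content units
f0 = torch.randint(0, 20, (1, 50))         # 50 frames of quantized F0
wav = model(units, f0, torch.tensor([3]))  # resynthesize in speaker 3's voice
print(wav.shape)                           # torch.Size([1, 16000])
```

Because the three streams are discrete and low-rate, transmitting the unit indices instead of the waveform is what yields the ultra-low-bitrate codec mentioned above.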


Example
Original: (audio sample)

| Method | # of Units | Resynthesis | Conversion to p258 | Conversion to p343 | Conversion to p292 |
|--------|------------|-------------|--------------------|--------------------|--------------------|
| HuBERT | 100        | (audio)     | (audio)            | (audio)            | (audio)            |
| CPC    | 100        | (audio)     | (audio)            | (audio)            | (audio)            |
| VQ-VAE | 256        | (audio)     | (audio)            | (audio)            | (audio)            |
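
The conversion columns pair a source utterance's content and prosody representations with a different target speaker's embedding. With the illustrative model sketched earlier, this amounts to re-running the decoder with another speaker index (the index used below, like the mapping to VCTK speakers such as p258, is purely hypothetical):

```python
# Voice conversion with the earlier sketch: keep the source utterance's
# discrete content and F0 units, decode with a different speaker index.
# The index 7 is an arbitrary stand-in for a target speaker (e.g., p258).
converted = model(units, f0, torch.tensor([7]))
```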
Sample page based on the HiFi-GAN page.