Paper ID | CHLG-3.5 |
Paper Title | THE HUYA MULTI-SPEAKER AND MULTI-STYLE SPEECH SYNTHESIS SYSTEM FOR M2VOC CHALLENGE 2020 |
Authors | Jie Wang, Tsinghua University, China; Yuren You, Feng Liu, Deyi Tuo, Shiyin Kang, Huya Inc, China; Zhiyong Wu, Tsinghua University, China; Helen Meng, The Chinese University of Hong Kong, China |
Session | CHLG-3: Multi-Speaker Multi-Style Voice Cloning Challenge (M2VoC) |
Location | Zoom |
Session Time | Monday, 07 June, 15:30 - 17:45 |
Presentation Time | Monday, 07 June, 15:30 - 17:45 |
Presentation | Poster |
Topic | Grand Challenge: Multi-Speaker Multi-Style Voice Cloning Challenge (M2VoC) |
Abstract |
Text-to-speech systems can now generate speech that is hard to distinguish from human speech. In this paper, we propose the Huya multi-speaker and multi-style speech synthesis system, which is based on DurIAN and HiFi-GAN, to generate high-fidelity speech even under low-resource conditions. We use a fine-grained linguistic representation that leverages the similarity in pronunciation between different languages and improves the quality of code-switched speech synthesis. Our TTS system uses HiFi-GAN as the neural vocoder, which offers higher synthesis stability for unseen speakers and generates higher-quality speech from noisy training data than WaveRNN in the challenge tasks. The model is trained on the datasets released by the organizer, with CMU-ARCTIC, AIShell-1 and THCHS-30 as external datasets, and the results were evaluated by the organizer. We participated in all four tracks, and three of our entries reached the high-score lists. The evaluation results show that our system outperforms the majority of participating teams. |
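The abstract describes a two-stage pipeline: a DurIAN-based acoustic model predicts mel-spectrograms from linguistic features, and a HiFi-GAN vocoder converts them to waveforms. Below is a minimal PyTorch sketch of that interface only; the module names, layer sizes, and the toy length-regulation step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-stage synthesis pipeline named in the abstract:
# a DurIAN-style acoustic model (phonemes -> mel-spectrogram) followed by a
# HiFi-GAN-style vocoder (mel-spectrogram -> waveform). All shapes and
# module names are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class DurIANStyleAcousticModel(nn.Module):
    """Toy duration-informed acoustic model (assumed interface)."""
    def __init__(self, n_phonemes=100, d_model=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.duration = nn.Linear(d_model, 1)   # predicted frames per phoneme
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.to_mel = nn.Linear(d_model, n_mels)

    def forward(self, phonemes):                 # phonemes: (1, T_ph)
        x = self.embed(phonemes)
        h, _ = self.encoder(x)
        # Length regulation as in DurIAN: expand each phoneme's encoder
        # state to its predicted number of frames.
        dur = torch.clamp(self.duration(h).squeeze(-1).round().long(), min=1)
        frames = torch.repeat_interleave(h[0], dur[0], dim=0).unsqueeze(0)
        y, _ = self.decoder(frames)
        return self.to_mel(y)                    # (1, T_frames, n_mels)

class HiFiGANStyleVocoder(nn.Module):
    """Toy feed-forward mel-to-waveform generator standing in for HiFi-GAN."""
    def __init__(self, n_mels=80, hop=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(n_mels, 32, hop * 2, stride=hop, padding=hop // 2),
            nn.LeakyReLU(0.1),
            nn.Conv1d(32, 1, 7, padding=3),
            nn.Tanh(),
        )

    def forward(self, mel):                      # mel: (1, T_frames, n_mels)
        return self.net(mel.transpose(1, 2)).squeeze(1)  # (1, n_samples)

# Usage: phoneme IDs in, waveform samples out.
acoustic, vocoder = DurIANStyleAcousticModel(), HiFiGANStyleVocoder()
phonemes = torch.randint(0, 100, (1, 12))        # a dummy utterance
with torch.no_grad():
    wav = vocoder(acoustic(phonemes))
print(wav.shape)                                 # (1, n_samples)
```

Note that the vocoder stage here is a single feed-forward pass, which reflects why a HiFi-GAN-style generator can be more stable for unseen speakers than an autoregressive vocoder such as WaveRNN, as the abstract reports.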