Presentation #: 3
Session: Voice Conversion and TTS
Session Time: Friday, December 21, 10:00 - 12:00
Presentation Time: Friday, December 21, 10:00 - 12:00
Presentation: Poster
Topic: Special Session on Speech Synthesis
Paper Title: Adaptive WaveNet Vocoder for Residual Compensation in GAN-based Voice Conversion
Authors: Berrak Sisman, National University of Singapore; Mingyang Zhang, National University of Singapore; Sakriani Sakti, Nara Institute of Science and Technology; Haizhou Li, National University of Singapore; Satoshi Nakamura, Nara Institute of Science and Technology
Abstract:
In this paper, we propose to use generative adversarial networks (GAN) together with a WaveNet vocoder to address the over-smoothing problem arising from deep learning approaches to voice conversion, and to improve the vocoding quality over traditional vocoders. As the GAN aims to minimize the divergence between the natural and converted speech parameters, it effectively alleviates the over-smoothing problem in the converted speech. On the other hand, the WaveNet vocoder allows us to leverage human speech from a large speaker population, thus improving the naturalness of the synthetic voice. Furthermore, for the first time, we study how to use the WaveNet vocoder for residual compensation to improve voice conversion performance. The experiments show that the proposed voice conversion framework consistently outperforms the baselines.
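For illustration only (not taken from the paper): the adversarial objective the abstract describes — a discriminator scoring natural versus converted spectral frames, with the converter trained to fool it — can be sketched as a vanilla GAN loss over feature frames. The 40-dimensional features, the linear discriminator, and all variable names here are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy spectral frames: 100 frames of 40-dim features (assumed dimensionality).
# The "converted" frames stand in for over-smoothed conversion output.
natural = rng.normal(0.0, 1.0, size=(100, 40))
converted = rng.normal(0.5, 1.0, size=(100, 40))

# A minimal linear discriminator scoring "natural vs. converted" frames.
w = rng.normal(size=40) * 0.01
b = 0.0

def discriminator(frames):
    return sigmoid(frames @ w + b)

# Standard GAN binary cross-entropy losses: the discriminator is rewarded for
# separating natural from converted frames; the converter (generator) is
# rewarded when its output is scored as natural.
eps = 1e-12
d_loss = (-np.mean(np.log(discriminator(natural) + eps))
          - np.mean(np.log(1.0 - discriminator(converted) + eps)))
g_loss = -np.mean(np.log(discriminator(converted) + eps))
```

Minimizing `g_loss` pushes the converted feature distribution toward the natural one, which is how the GAN counteracts the over-smoothing that plain regression objectives produce.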